Jan 13 20:37:12.031733 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:37:12.031776 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:37:12.031791 kernel: BIOS-provided physical RAM map:
Jan 13 20:37:12.031802 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:37:12.031812 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:37:12.031822 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:37:12.031837 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:37:12.031848 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:37:12.031859 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:37:12.031869 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:37:12.031880 kernel: NX (Execute Disable) protection: active
Jan 13 20:37:12.031890 kernel: APIC: Static calls initialized
Jan 13 20:37:12.031900 kernel: SMBIOS 2.7 present.
Jan 13 20:37:12.031911 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:37:12.031927 kernel: Hypervisor detected: KVM
Jan 13 20:37:12.031939 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:37:12.031950 kernel: kvm-clock: using sched offset of 8088237924 cycles
Jan 13 20:37:12.032015 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:37:12.032030 kernel: tsc: Detected 2499.994 MHz processor
Jan 13 20:37:12.032043 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:37:12.032104 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:37:12.032121 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:37:12.032134 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:37:12.032146 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:37:12.032158 kernel: Using GB pages for direct mapping
Jan 13 20:37:12.032170 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:37:12.032182 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:37:12.032194 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:37:12.032205 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:37:12.032402 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:37:12.032421 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:37:12.032470 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:37:12.032484 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:37:12.032496 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 20:37:12.032508 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:37:12.032520 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:37:12.032532 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:37:12.032638 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:37:12.032650 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:37:12.032666 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:37:12.032694 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:37:12.032708 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:37:12.032719 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:37:12.032733 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:37:12.033969 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:37:12.033989 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:37:12.034044 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:37:12.034060 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:37:12.034076 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:37:12.034091 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:37:12.034106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:37:12.034121 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:37:12.034136 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:37:12.034156 kernel: Zone ranges:
Jan 13 20:37:12.034172 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:37:12.034187 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:37:12.034202 kernel: Normal empty
Jan 13 20:37:12.034218 kernel: Movable zone start for each node
Jan 13 20:37:12.034233 kernel: Early memory node ranges
Jan 13 20:37:12.034247 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:37:12.034263 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:37:12.034278 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:37:12.034293 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:37:12.034311 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:37:12.034326 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:37:12.034341 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:37:12.034356 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:37:12.034371 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:37:12.034386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:37:12.034401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:37:12.034417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:37:12.034432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:37:12.034450 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:37:12.034465 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:37:12.034480 kernel: TSC deadline timer available
Jan 13 20:37:12.034495 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:37:12.034510 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:37:12.034525 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:37:12.034540 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:37:12.034555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:37:12.034571 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:37:12.034589 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:37:12.034604 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:37:12.034617 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:37:12.034629 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:37:12.034643 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:37:12.034661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:37:12.034677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:37:12.034708 kernel: random: crng init done
Jan 13 20:37:12.034724 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:37:12.034737 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:37:12.034750 kernel: Fallback order for Node 0: 0
Jan 13 20:37:12.034763 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:37:12.034776 kernel: Policy zone: DMA32
Jan 13 20:37:12.034789 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:37:12.034803 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127200K reserved, 0K cma-reserved)
Jan 13 20:37:12.034816 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:37:12.034829 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:37:12.034846 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:37:12.034859 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:37:12.034872 kernel: Dynamic Preempt: voluntary
Jan 13 20:37:12.034886 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:37:12.034905 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:37:12.034919 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:37:12.034933 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:37:12.034946 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:37:12.034959 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:37:12.034976 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:37:12.034989 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:37:12.035003 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:37:12.035016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:37:12.035122 kernel: Console: colour VGA+ 80x25
Jan 13 20:37:12.035137 kernel: printk: console [ttyS0] enabled
Jan 13 20:37:12.035150 kernel: ACPI: Core revision 20230628
Jan 13 20:37:12.035165 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:37:12.035179 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:37:12.035196 kernel: x2apic enabled
Jan 13 20:37:12.035210 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:37:12.035236 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jan 13 20:37:12.035254 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Jan 13 20:37:12.035269 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:37:12.035284 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:37:12.035299 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:37:12.035314 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:37:12.035328 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:37:12.035342 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:37:12.035357 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:37:12.035372 kernel: RETBleed: Vulnerable
Jan 13 20:37:12.035387 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:37:12.035405 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:37:12.035420 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:37:12.035434 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:37:12.035449 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:37:12.035463 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:37:12.035479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:37:12.035496 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:37:12.035512 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:37:12.035526 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:37:12.035540 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:37:12.035554 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:37:12.035569 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:37:12.035585 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:37:12.035600 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:37:12.035614 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:37:12.035630 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:37:12.035644 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:37:12.035661 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:37:12.035675 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:37:12.035705 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 20:37:12.035718 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:37:12.035731 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:37:12.035745 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:37:12.035761 kernel: landlock: Up and running.
Jan 13 20:37:12.035774 kernel: SELinux: Initializing.
Jan 13 20:37:12.035788 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:37:12.035803 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:37:12.035818 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 20:37:12.035833 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:37:12.035853 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:37:12.035868 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:37:12.035883 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:37:12.035897 kernel: signal: max sigframe size: 3632
Jan 13 20:37:12.035910 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:37:12.035927 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:37:12.035942 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:37:12.035957 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:37:12.035975 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:37:12.035990 kernel: .... node #0, CPUs: #1
Jan 13 20:37:12.036007 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:37:12.036023 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:37:12.036037 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:37:12.036051 kernel: smpboot: Max logical packages: 1
Jan 13 20:37:12.036114 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Jan 13 20:37:12.036128 kernel: devtmpfs: initialized
Jan 13 20:37:12.036142 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:37:12.036161 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:37:12.036176 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:37:12.036191 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:37:12.036206 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:37:12.036221 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:37:12.036237 kernel: audit: type=2000 audit(1736800631.520:1): state=initialized audit_enabled=0 res=1
Jan 13 20:37:12.036251 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:37:12.036263 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:37:12.036276 kernel: cpuidle: using governor menu
Jan 13 20:37:12.036294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:37:12.036308 kernel: dca service started, version 1.12.1
Jan 13 20:37:12.036322 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:37:12.036336 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:37:12.036351 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:37:12.036367 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:37:12.036382 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:37:12.036396 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:37:12.036413 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:37:12.036432 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:37:12.036448 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:37:12.036464 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:37:12.036481 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:37:12.036496 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:37:12.036512 kernel: ACPI: Interpreter enabled
Jan 13 20:37:12.036529 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:37:12.036546 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:37:12.036561 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:37:12.036580 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:37:12.036596 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:37:12.036612 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:37:12.036876 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:37:12.037026 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:37:12.037159 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:37:12.037177 kernel: acpiphp: Slot [3] registered
Jan 13 20:37:12.037196 kernel: acpiphp: Slot [4] registered
Jan 13 20:37:12.037210 kernel: acpiphp: Slot [5] registered
Jan 13 20:37:12.037223 kernel: acpiphp: Slot [6] registered
Jan 13 20:37:12.037237 kernel: acpiphp: Slot [7] registered
Jan 13 20:37:12.037251 kernel: acpiphp: Slot [8] registered
Jan 13 20:37:12.037265 kernel: acpiphp: Slot [9] registered
Jan 13 20:37:12.037278 kernel: acpiphp: Slot [10] registered
Jan 13 20:37:12.037292 kernel: acpiphp: Slot [11] registered
Jan 13 20:37:12.037305 kernel: acpiphp: Slot [12] registered
Jan 13 20:37:12.037319 kernel: acpiphp: Slot [13] registered
Jan 13 20:37:12.037334 kernel: acpiphp: Slot [14] registered
Jan 13 20:37:12.037348 kernel: acpiphp: Slot [15] registered
Jan 13 20:37:12.037362 kernel: acpiphp: Slot [16] registered
Jan 13 20:37:12.037376 kernel: acpiphp: Slot [17] registered
Jan 13 20:37:12.037389 kernel: acpiphp: Slot [18] registered
Jan 13 20:37:12.037402 kernel: acpiphp: Slot [19] registered
Jan 13 20:37:12.037415 kernel: acpiphp: Slot [20] registered
Jan 13 20:37:12.037428 kernel: acpiphp: Slot [21] registered
Jan 13 20:37:12.037442 kernel: acpiphp: Slot [22] registered
Jan 13 20:37:12.037458 kernel: acpiphp: Slot [23] registered
Jan 13 20:37:12.037472 kernel: acpiphp: Slot [24] registered
Jan 13 20:37:12.037485 kernel: acpiphp: Slot [25] registered
Jan 13 20:37:12.037498 kernel: acpiphp: Slot [26] registered
Jan 13 20:37:12.037511 kernel: acpiphp: Slot [27] registered
Jan 13 20:37:12.037524 kernel: acpiphp: Slot [28] registered
Jan 13 20:37:12.037537 kernel: acpiphp: Slot [29] registered
Jan 13 20:37:12.037550 kernel: acpiphp: Slot [30] registered
Jan 13 20:37:12.037564 kernel: acpiphp: Slot [31] registered
Jan 13 20:37:12.037578 kernel: PCI host bridge to bus 0000:00
Jan 13 20:37:12.038640 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:37:12.038810 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:37:12.038936 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:37:12.039056 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:37:12.039174 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:37:12.039327 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:37:12.039480 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:37:12.039629 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:37:12.039781 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:37:12.039917 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:37:12.040147 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:37:12.040292 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:37:12.040425 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:37:12.040629 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:37:12.040778 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:37:12.040921 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:37:12.041344 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:37:12.041605 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:37:12.041907 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:37:12.042215 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:37:12.042379 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:37:12.042516 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:37:12.042658 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:37:12.042816 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:37:12.042836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:37:12.042851 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:37:12.042867 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:37:12.042885 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:37:12.042900 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:37:12.042915 kernel: iommu: Default domain type: Translated
Jan 13 20:37:12.042931 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:37:12.042947 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:37:12.042964 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:37:12.042979 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:37:12.042995 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:37:12.043137 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:37:12.043279 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:37:12.043410 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:37:12.043428 kernel: vgaarb: loaded
Jan 13 20:37:12.043442 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:37:12.043456 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:37:12.043469 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:37:12.043482 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:37:12.043496 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:37:12.043514 kernel: pnp: PnP ACPI init
Jan 13 20:37:12.043529 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:37:12.043544 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:37:12.043559 kernel: NET: Registered PF_INET protocol family
Jan 13 20:37:12.043573 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:37:12.043586 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:37:12.043600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:37:12.043615 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:37:12.043629 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:37:12.043648 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:37:12.043663 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:37:12.043678 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:37:12.043716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:37:12.043731 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:37:12.043865 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:37:12.043987 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:37:12.044112 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:37:12.044240 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:37:12.044386 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:37:12.044408 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:37:12.044424 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:37:12.044438 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jan 13 20:37:12.044452 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:37:12.044466 kernel: Initialise system trusted keyrings
Jan 13 20:37:12.044479 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:37:12.044498 kernel: Key type asymmetric registered
Jan 13 20:37:12.044514 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:37:12.044528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:37:12.044544 kernel: io scheduler mq-deadline registered
Jan 13 20:37:12.044560 kernel: io scheduler kyber registered
Jan 13 20:37:12.044574 kernel: io scheduler bfq registered
Jan 13 20:37:12.044589 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:37:12.044604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:37:12.044619 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:37:12.044638 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:37:12.044652 kernel: i8042: Warning: Keylock active
Jan 13 20:37:12.044668 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:37:12.044716 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:37:12.044892 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:37:12.045035 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:37:12.045175 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:37:11 UTC (1736800631)
Jan 13 20:37:12.045310 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:37:12.045336 kernel: intel_pstate: CPU model not supported
Jan 13 20:37:12.045353 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:37:12.045369 kernel: Segment Routing with IPv6
Jan 13 20:37:12.045385 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:37:12.045401 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:37:12.045417 kernel: Key type dns_resolver registered
Jan 13 20:37:12.045433 kernel: IPI shorthand broadcast: enabled
Jan 13 20:37:12.045449 kernel: sched_clock: Marking stable (567036152, 202588524)->(849696140, -80071464)
Jan 13 20:37:12.045466 kernel: registered taskstats version 1
Jan 13 20:37:12.045487 kernel: Loading compiled-in X.509 certificates
Jan 13 20:37:12.045503 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:37:12.045520 kernel: Key type .fscrypt registered
Jan 13 20:37:12.045536 kernel: Key type fscrypt-provisioning registered
Jan 13 20:37:12.045552 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:37:12.045568 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:37:12.045584 kernel: ima: No architecture policies found
Jan 13 20:37:12.045600 kernel: clk: Disabling unused clocks
Jan 13 20:37:12.045617 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:37:12.045637 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:37:12.045653 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:37:12.045670 kernel: Run /init as init process
Jan 13 20:37:12.045725 kernel: with arguments:
Jan 13 20:37:12.045740 kernel: /init
Jan 13 20:37:12.045755 kernel: with environment:
Jan 13 20:37:12.045769 kernel: HOME=/
Jan 13 20:37:12.045839 kernel: TERM=linux
Jan 13 20:37:12.045858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:37:12.045885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:37:12.045917 systemd[1]: Detected virtualization amazon.
Jan 13 20:37:12.045936 systemd[1]: Detected architecture x86-64.
Jan 13 20:37:12.045952 systemd[1]: Running in initrd.
Jan 13 20:37:12.045968 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:37:12.045986 systemd[1]: Hostname set to <localhost>.
Jan 13 20:37:12.046003 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:37:12.046020 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:37:12.046036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:37:12.046053 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:37:12.046071 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:37:12.046087 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:37:12.046103 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:37:12.046122 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:37:12.046141 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:37:12.046157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:37:12.046174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:37:12.046191 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:37:12.046206 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:37:12.046223 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:37:12.046242 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:37:12.046258 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:37:12.046274 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:37:12.046290 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:37:12.046307 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:37:12.046324 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:37:12.046340 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:37:12.046357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:37:12.046376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:37:12.046392 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:37:12.046409 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:37:12.046425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:37:12.046441 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:37:12.046458 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:37:12.046474 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:37:12.046493 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:37:12.046513 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:37:12.046531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:37:12.046549 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:37:12.046604 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:37:12.046642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:37:12.046659 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:37:12.046675 systemd-journald[179]: Journal started
Jan 13 20:37:12.046747 systemd-journald[179]: Runtime Journal (/run/log/journal/ec223f7f72347b528404ef42d63a851b) is 4.8M, max 38.5M, 33.7M free.
Jan 13 20:37:12.053662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:37:12.059747 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:37:12.059106 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:37:12.079974 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:37:12.087743 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:37:12.112219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:37:12.137706 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:37:12.142721 kernel: Bridge firewalling registered
Jan 13 20:37:12.140824 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:37:12.141571 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:37:12.143182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:37:12.149256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:37:12.163672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:37:12.270020 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:37:12.283989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:37:12.287515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:37:12.303627 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:37:12.323331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:37:12.348392 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:37:12.381233 dracut-cmdline[216]: dracut-dracut-053
Jan 13 20:37:12.386330 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:37:12.395130 systemd-resolved[206]: Positive Trust Anchors:
Jan 13 20:37:12.395151 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:37:12.395216 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:37:12.399857 systemd-resolved[206]: Defaulting to hostname 'linux'.
Jan 13 20:37:12.401631 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:37:12.403550 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:37:12.527717 kernel: SCSI subsystem initialized
Jan 13 20:37:12.542714 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:37:12.557724 kernel: iscsi: registered transport (tcp)
Jan 13 20:37:12.586916 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:37:12.587003 kernel: QLogic iSCSI HBA Driver
Jan 13 20:37:12.644026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:37:12.651994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:37:12.700017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:37:12.700198 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:37:12.700225 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:37:12.771764 kernel: raid6: avx512x4 gen() 11872 MB/s
Jan 13 20:37:12.788799 kernel: raid6: avx512x2 gen() 10246 MB/s
Jan 13 20:37:12.805732 kernel: raid6: avx512x1 gen() 3915 MB/s
Jan 13 20:37:12.823740 kernel: raid6: avx2x4 gen() 4845 MB/s
Jan 13 20:37:12.844964 kernel: raid6: avx2x2 gen() 3720 MB/s
Jan 13 20:37:12.861854 kernel: raid6: avx2x1 gen() 3896 MB/s
Jan 13 20:37:12.861933 kernel: raid6: using algorithm avx512x4 gen() 11872 MB/s
Jan 13 20:37:12.879857 kernel: raid6: .... xor() 4469 MB/s, rmw enabled
Jan 13 20:37:12.879946 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:37:12.906718 kernel: xor: automatically using best checksumming function avx
Jan 13 20:37:13.088712 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:37:13.101039 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:37:13.107939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:37:13.133060 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 13 20:37:13.138524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:37:13.152194 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:37:13.170848 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jan 13 20:37:13.219658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:37:13.227975 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:37:13.339839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:37:13.361202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:37:13.412797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:37:13.417532 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:37:13.419897 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:37:13.426824 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:37:13.436915 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:37:13.473706 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:37:13.502752 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:37:13.521342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:37:13.523757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:37:13.532905 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:37:13.566666 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:37:13.567218 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:37:13.567243 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:37:13.567264 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 20:37:13.567438 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:84:6c:66:ab:61
Jan 13 20:37:13.532826 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:37:13.534722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:37:13.534815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:37:13.536124 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:37:13.753081 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:37:13.753362 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:37:13.753383 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:37:13.753530 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:37:13.753550 kernel: GPT:9289727 != 16777215
Jan 13 20:37:13.753567 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:37:13.753584 kernel: GPT:9289727 != 16777215
Jan 13 20:37:13.753600 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:37:13.753616 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:37:13.545218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:37:13.580142 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:37:13.758155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:37:13.769026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:37:13.786717 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (460)
Jan 13 20:37:13.812192 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (454)
Jan 13 20:37:13.842118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:37:13.919411 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:37:13.928548 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:37:13.937423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:37:13.943810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:37:13.943966 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:37:13.965989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:37:13.992667 disk-uuid[632]: Primary Header is updated.
Jan 13 20:37:13.992667 disk-uuid[632]: Secondary Entries is updated.
Jan 13 20:37:13.992667 disk-uuid[632]: Secondary Header is updated.
Jan 13 20:37:13.998768 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:37:15.009711 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:37:15.011542 disk-uuid[633]: The operation has completed successfully.
Jan 13 20:37:15.171520 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:37:15.171647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:37:15.202909 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:37:15.207156 sh[893]: Success
Jan 13 20:37:15.229717 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:37:15.370310 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:37:15.389538 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:37:15.394886 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:37:15.443854 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:37:15.444197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:37:15.444231 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:37:15.445111 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:37:15.445846 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:37:15.549723 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:37:15.576440 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:37:15.579320 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:37:15.593112 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:37:15.597989 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:37:15.637638 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:37:15.637732 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:37:15.637756 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:37:15.647724 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:37:15.671194 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:37:15.674916 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:37:15.684067 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:37:15.696248 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:37:15.784751 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:37:15.798194 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:37:15.889123 systemd-networkd[1085]: lo: Link UP
Jan 13 20:37:15.889135 systemd-networkd[1085]: lo: Gained carrier
Jan 13 20:37:15.893253 systemd-networkd[1085]: Enumeration completed
Jan 13 20:37:15.893779 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:37:15.893784 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:37:15.894830 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:37:15.899149 systemd[1]: Reached target network.target - Network.
Jan 13 20:37:15.904333 systemd-networkd[1085]: eth0: Link UP
Jan 13 20:37:15.904342 systemd-networkd[1085]: eth0: Gained carrier
Jan 13 20:37:15.904360 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:37:15.919864 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.25.143/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:37:16.149038 ignition[1020]: Ignition 2.20.0
Jan 13 20:37:16.149054 ignition[1020]: Stage: fetch-offline
Jan 13 20:37:16.149368 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:37:16.149384 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:37:16.149997 ignition[1020]: Ignition finished successfully
Jan 13 20:37:16.154581 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:37:16.163346 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:37:16.181621 ignition[1095]: Ignition 2.20.0
Jan 13 20:37:16.181633 ignition[1095]: Stage: fetch
Jan 13 20:37:16.182400 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:37:16.182412 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:37:16.182499 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:37:16.198374 ignition[1095]: PUT result: OK
Jan 13 20:37:16.201047 ignition[1095]: parsed url from cmdline: ""
Jan 13 20:37:16.201061 ignition[1095]: no config URL provided
Jan 13 20:37:16.201073 ignition[1095]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:37:16.201090 ignition[1095]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:37:16.201153 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:37:16.202551 ignition[1095]: PUT result: OK
Jan 13 20:37:16.202604 ignition[1095]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:37:16.204890 ignition[1095]: GET result: OK
Jan 13 20:37:16.204950 ignition[1095]: parsing config with SHA512: a622c36270fbf4414c836e3692baf9c6ddb22338cf328b9704f4ecab2be820941464f6c9cd97abd6b90a240d95a943be9006396c847940dfe6897e41ddf6cc46
Jan 13 20:37:16.212221 unknown[1095]: fetched base config from "system"
Jan 13 20:37:16.212236 unknown[1095]: fetched base config from "system"
Jan 13 20:37:16.212584 ignition[1095]: fetch: fetch complete
Jan 13 20:37:16.212243 unknown[1095]: fetched user config from "aws"
Jan 13 20:37:16.212591 ignition[1095]: fetch: fetch passed
Jan 13 20:37:16.212647 ignition[1095]: Ignition finished successfully
Jan 13 20:37:16.217232 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:37:16.228959 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:37:16.247225 ignition[1101]: Ignition 2.20.0
Jan 13 20:37:16.247240 ignition[1101]: Stage: kargs
Jan 13 20:37:16.247682 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:37:16.247726 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:37:16.247853 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:37:16.249237 ignition[1101]: PUT result: OK
Jan 13 20:37:16.255418 ignition[1101]: kargs: kargs passed
Jan 13 20:37:16.255497 ignition[1101]: Ignition finished successfully
Jan 13 20:37:16.258174 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:37:16.265089 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:37:16.284783 ignition[1108]: Ignition 2.20.0
Jan 13 20:37:16.284799 ignition[1108]: Stage: disks
Jan 13 20:37:16.285816 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:37:16.285835 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:37:16.286010 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:37:16.287653 ignition[1108]: PUT result: OK
Jan 13 20:37:16.301841 ignition[1108]: disks: disks passed
Jan 13 20:37:16.303258 ignition[1108]: Ignition finished successfully
Jan 13 20:37:16.304780 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:37:16.308762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:37:16.310457 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:37:16.315607 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:37:16.322959 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:37:16.332088 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:37:16.344997 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:37:16.391940 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:37:16.398365 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:37:16.413863 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:37:16.544732 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:37:16.545397 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:37:16.550154 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:37:16.569829 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:37:16.580976 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:37:16.582419 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:37:16.582470 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:37:16.582498 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:37:16.595287 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:37:16.603999 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:37:16.613709 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1135)
Jan 13 20:37:16.616182 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:37:16.616240 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:37:16.616260 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:37:16.632744 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:37:16.634060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:37:16.955390 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:37:16.975480 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:37:16.986875 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:37:17.014419 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:37:17.318007 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:37:17.323851 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:37:17.326391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:37:17.342190 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:37:17.343333 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:37:17.392710 ignition[1247]: INFO : Ignition 2.20.0
Jan 13 20:37:17.392710 ignition[1247]: INFO : Stage: mount
Jan 13 20:37:17.392710 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:37:17.392710 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:37:17.397746 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:37:17.399198 ignition[1247]: INFO : PUT result: OK
Jan 13 20:37:17.400331 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:37:17.408232 ignition[1247]: INFO : mount: mount passed
Jan 13 20:37:17.409556 ignition[1247]: INFO : Ignition finished successfully
Jan 13 20:37:17.411405 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:37:17.419054 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:37:17.552943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:37:17.603743 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1261)
Jan 13 20:37:17.606919 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:37:17.606987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:37:17.607019 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:37:17.612328 systemd-networkd[1085]: eth0: Gained IPv6LL
Jan 13 20:37:17.618707 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:37:17.624016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:37:17.666627 ignition[1278]: INFO : Ignition 2.20.0 Jan 13 20:37:17.666627 ignition[1278]: INFO : Stage: files Jan 13 20:37:17.668908 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:17.668908 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:37:17.668908 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:37:17.675808 ignition[1278]: INFO : PUT result: OK Jan 13 20:37:17.678798 ignition[1278]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:37:17.681746 ignition[1278]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:37:17.681746 ignition[1278]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:37:17.705567 ignition[1278]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:37:17.707724 ignition[1278]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:37:17.710011 unknown[1278]: wrote ssh authorized keys file for user: core Jan 13 20:37:17.711736 ignition[1278]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:37:17.729251 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:37:17.732219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:37:18.232653 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:37:18.815529 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:37:18.819060 ignition[1278]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:37:18.819060 ignition[1278]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:37:18.819060 ignition[1278]: INFO : files: files passed Jan 13 20:37:18.819060 ignition[1278]: INFO : Ignition finished successfully Jan 13 20:37:18.820563 systemd[1]: Finished ignition-files.service - Ignition (files). 
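
The files stage above replays the instance's Ignition config: it creates the user "core" with SSH keys, writes /home/core/install.sh and /etc/flatcar/update.conf, downloads the kubernetes sysext image from the sysext-bakery release, and links it into /etc/extensions so it can be merged into /usr later in boot. A plausible spec-3.x config shape for those operations, sketched in Python (the spec version, file contents, modes, and key material are placeholders, not recovered from this log):

    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed; Ignition 2.20.0 accepts 3.x specs
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
        }]},
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 0o755,
                 "contents": {"source": "data:,echo%20placeholder"}},
                {"path": "/etc/flatcar/update.conf", "mode": 0o644,
                 "contents": {"source": "data:,GROUP%3Dstable"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
    }
    print(json.dumps(config, indent=2))
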
Jan 13 20:37:18.834001 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:37:18.839950 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:37:18.856752 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:37:18.856986 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:37:18.875164 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:37:18.878448 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:37:18.880483 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:37:18.883272 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:37:18.888618 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:37:18.899991 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:37:19.006801 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:37:19.006932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:37:19.009421 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:37:19.011773 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:37:19.014243 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:37:19.027785 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:37:19.048008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:37:19.059926 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:37:19.106640 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:37:19.110656 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:37:19.112246 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:37:19.114664 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:37:19.115039 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:37:19.119490 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:37:19.123853 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:37:19.125233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:37:19.127798 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:37:19.131837 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:37:19.138429 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:37:19.142625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:37:19.145597 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:37:19.148216 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:37:19.149928 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:37:19.158173 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:37:19.158317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 20:37:19.162763 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:37:19.165029 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:37:19.172317 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:37:19.177184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:37:19.181571 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:37:19.181765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:37:19.187257 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:37:19.189136 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:37:19.192648 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:37:19.193851 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:37:19.204561 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:37:19.222016 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:37:19.223476 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:37:19.223702 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:37:19.227251 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:37:19.229272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:37:19.246626 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:37:19.246969 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:37:19.261622 ignition[1330]: INFO : Ignition 2.20.0 Jan 13 20:37:19.263129 ignition[1330]: INFO : Stage: umount Jan 13 20:37:19.263129 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:19.263129 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:37:19.263129 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:37:19.269835 ignition[1330]: INFO : PUT result: OK Jan 13 20:37:19.273584 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:37:19.279726 ignition[1330]: INFO : umount: umount passed Jan 13 20:37:19.281489 ignition[1330]: INFO : Ignition finished successfully Jan 13 20:37:19.284071 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:37:19.284193 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:37:19.287257 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:37:19.287314 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:37:19.289487 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:37:19.289543 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:37:19.293178 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:37:19.293257 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:37:19.297048 systemd[1]: Stopped target network.target - Network. Jan 13 20:37:19.299426 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:37:19.299525 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:37:19.301758 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:37:19.304077 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 13 20:37:19.307949 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:37:19.308057 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:37:19.312038 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:37:19.314133 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:37:19.314202 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:37:19.317463 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:37:19.317587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:37:19.319547 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:37:19.319628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:37:19.321643 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:37:19.321732 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:37:19.323703 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:37:19.327313 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:37:19.333750 systemd-networkd[1085]: eth0: DHCPv6 lease lost Jan 13 20:37:19.336078 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:37:19.336215 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:37:19.341673 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:37:19.341921 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:37:19.353933 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:37:19.356168 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:37:19.356269 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:37:19.366159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:37:19.376061 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:37:19.376222 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:37:19.392447 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:37:19.394230 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:37:19.401344 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:37:19.401431 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:37:19.404905 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:37:19.404962 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:37:19.406410 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:37:19.406519 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:37:19.409751 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:37:19.409885 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:37:19.413401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:37:19.413481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:37:19.431382 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:37:19.432786 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 13 20:37:19.432888 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:37:19.435359 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:37:19.435441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:37:19.437659 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:37:19.437751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:37:19.443909 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:37:19.444073 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:37:19.447679 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:37:19.447767 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:37:19.450702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:37:19.452034 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:37:19.461291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:37:19.461359 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:19.466380 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:37:19.467641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:37:19.472275 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:37:19.473805 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:37:19.476312 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:37:19.477619 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:37:19.489229 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:37:19.495010 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:37:19.495386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:37:19.511120 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:37:19.533856 systemd[1]: Switching root. Jan 13 20:37:19.580119 systemd-journald[179]: Journal stopped Jan 13 20:37:21.877640 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 13 20:37:21.883561 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:37:21.883592 kernel: SELinux: policy capability open_perms=1 Jan 13 20:37:21.883620 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:37:21.883643 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:37:21.883670 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:37:21.883887 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:37:21.883912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:37:21.883929 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:37:21.883946 kernel: audit: type=1403 audit(1736800640.095:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:37:21.883972 systemd[1]: Successfully loaded SELinux policy in 70.002ms. Jan 13 20:37:21.883994 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.326ms. 
Jan 13 20:37:21.884015 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:37:21.887427 systemd[1]: Detected virtualization amazon. Jan 13 20:37:21.887488 systemd[1]: Detected architecture x86-64. Jan 13 20:37:21.887511 systemd[1]: Detected first boot. Jan 13 20:37:21.887542 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:37:21.887565 zram_generator::config[1373]: No configuration found. Jan 13 20:37:21.887588 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:37:21.887608 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:37:21.887628 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:37:21.887651 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:37:21.887672 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:37:21.892763 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:37:21.892807 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:37:21.892825 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:37:21.892853 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:37:21.892875 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:37:21.892894 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:37:21.892920 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:37:21.892940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:37:21.892961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:37:21.893080 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:37:21.893105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:37:21.893125 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:37:21.893146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:37:21.893167 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:37:21.893189 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:37:21.893218 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:37:21.893239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:37:21.893261 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:37:21.893284 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:37:21.893303 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:37:21.893329 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:37:21.893351 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:37:21.893373 systemd[1]: Reached target swap.target - Swaps. 
Jan 13 20:37:21.893399 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:37:21.893420 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:37:21.893441 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:37:21.893463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:37:21.893483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:37:21.893502 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:37:21.893521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:37:21.893541 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:37:21.893560 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:37:21.893584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:21.893603 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:37:21.893626 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:37:21.893649 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:37:21.893674 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:37:21.907826 systemd[1]: Reached target machines.target - Containers. Jan 13 20:37:21.907864 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:37:21.907887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:21.907918 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:37:21.907939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:37:21.907960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:21.907980 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:37:21.908000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:21.908023 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:37:21.908044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:37:21.908065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:37:21.908085 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:37:21.908110 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:37:21.908129 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:37:21.908149 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:37:21.908169 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:37:21.908188 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:37:21.908208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:37:21.908228 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 20:37:21.908250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:37:21.908270 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:37:21.908294 systemd[1]: Stopped verity-setup.service. Jan 13 20:37:21.908316 kernel: fuse: init (API version 7.39) Jan 13 20:37:21.908339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:21.908361 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:37:21.908384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:37:21.908410 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:37:21.908432 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:37:21.908456 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:37:21.908481 kernel: ACPI: bus type drm_connector registered Jan 13 20:37:21.908502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:37:21.908526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:37:21.908548 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:37:21.908571 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:37:21.908597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:21.908619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:21.908640 kernel: loop: module loaded Jan 13 20:37:21.908660 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:37:21.908700 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:37:21.911299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:21.911375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:21.911407 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:37:21.911430 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:37:21.911450 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:21.911469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:21.911489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:37:21.911510 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:37:21.911530 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:37:21.911555 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:37:21.911577 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:37:21.911598 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:37:21.911621 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:37:21.911643 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:37:21.911666 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:37:21.911711 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 13 20:37:21.911825 systemd-journald[1452]: Collecting audit messages is disabled. Jan 13 20:37:21.911876 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:37:21.911899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:21.911923 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:37:21.911945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:37:21.911973 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:37:21.911995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:37:21.912017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:37:21.912039 systemd-journald[1452]: Journal started Jan 13 20:37:21.912081 systemd-journald[1452]: Runtime Journal (/run/log/journal/ec223f7f72347b528404ef42d63a851b) is 4.8M, max 38.5M, 33.7M free. Jan 13 20:37:21.917702 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:37:21.253239 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:37:21.284829 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:37:21.285352 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:37:21.941377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:37:21.941454 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:37:21.944613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:37:21.946425 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:37:21.948471 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:37:21.997052 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:37:22.000359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:37:22.002467 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:37:22.008633 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:37:22.021058 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:37:22.032025 kernel: loop0: detected capacity change from 0 to 141000 Jan 13 20:37:22.038254 systemd-journald[1452]: Time spent on flushing to /var/log/journal/ec223f7f72347b528404ef42d63a851b is 176.342ms for 947 entries. Jan 13 20:37:22.038254 systemd-journald[1452]: System Journal (/var/log/journal/ec223f7f72347b528404ef42d63a851b) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:37:22.231949 systemd-journald[1452]: Received client request to flush runtime journal. Jan 13 20:37:22.232042 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:37:22.052776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:37:22.056009 systemd-tmpfiles[1479]: ACLs are not supported, ignoring. Jan 13 20:37:22.056031 systemd-tmpfiles[1479]: ACLs are not supported, ignoring. 
Jan 13 20:37:22.142373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:37:22.163071 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:37:22.181001 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:37:22.183096 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:37:22.185848 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:37:22.198673 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:37:22.244874 kernel: loop1: detected capacity change from 0 to 138184 Jan 13 20:37:22.243216 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:37:22.265124 udevadm[1516]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:37:22.291162 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:37:22.305029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:37:22.353234 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jan 13 20:37:22.353641 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jan 13 20:37:22.361561 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:37:22.370994 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 20:37:22.509056 kernel: loop3: detected capacity change from 0 to 62848 Jan 13 20:37:22.653718 kernel: loop4: detected capacity change from 0 to 141000 Jan 13 20:37:22.696286 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:37:22.736713 kernel: loop6: detected capacity change from 0 to 211296 Jan 13 20:37:22.779715 kernel: loop7: detected capacity change from 0 to 62848 Jan 13 20:37:22.792243 (sd-merge)[1528]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:37:22.794892 (sd-merge)[1528]: Merged extensions into '/usr'. Jan 13 20:37:22.804190 systemd[1]: Reloading requested from client PID 1478 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:37:22.804336 systemd[1]: Reloading... Jan 13 20:37:22.930775 zram_generator::config[1553]: No configuration found. Jan 13 20:37:23.254574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:23.364648 systemd[1]: Reloading finished in 559 ms. Jan 13 20:37:23.399968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:37:23.401844 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:37:23.417962 systemd[1]: Starting ensure-sysext.service... Jan 13 20:37:23.422975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:37:23.441079 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:37:23.448545 systemd[1]: Reloading requested from client PID 1603 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:37:23.448578 systemd[1]: Reloading... 
Jan 13 20:37:23.484978 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:37:23.485911 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:37:23.488165 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:37:23.489039 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 13 20:37:23.489222 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 13 20:37:23.497576 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:37:23.497596 systemd-tmpfiles[1604]: Skipping /boot Jan 13 20:37:23.534621 systemd-udevd[1605]: Using default interface naming scheme 'v255'. Jan 13 20:37:23.549549 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:37:23.549571 systemd-tmpfiles[1604]: Skipping /boot Jan 13 20:37:23.564727 zram_generator::config[1629]: No configuration found. Jan 13 20:37:23.931326 ldconfig[1470]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:37:23.935975 (udev-worker)[1649]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:37:24.053595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:24.087707 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 20:37:24.099986 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1657) Jan 13 20:37:24.100015 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:37:24.117708 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:37:24.126717 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 20:37:24.137708 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 20:37:24.178729 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 13 20:37:24.217395 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:37:24.218130 systemd[1]: Reloading finished in 768 ms. Jan 13 20:37:24.242698 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:37:24.246964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:37:24.248960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:37:24.300279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.309370 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:37:24.324037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:37:24.327048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:24.335087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:24.338895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:24.349844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 20:37:24.351253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:24.361750 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:37:24.369635 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:37:24.375159 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:37:24.390296 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:37:24.403633 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:37:24.405718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.426452 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:24.426717 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:24.436930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:24.437175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:24.441717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:24.442986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:24.479152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.480220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:24.494120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:24.504299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:24.523451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:37:24.526173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:24.541334 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:37:24.546661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:37:24.548789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.563387 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:37:24.565919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:24.567801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:24.600670 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:37:24.605412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.607050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:24.621576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:24.634328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 13 20:37:24.635110 augenrules[1830]: No rules Jan 13 20:37:24.635627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:24.637347 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:37:24.652154 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:37:24.653474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.657827 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:37:24.658271 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:37:24.660497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:24.661953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:24.666417 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:24.666708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:24.691699 systemd[1]: Finished ensure-sysext.service. Jan 13 20:37:24.711128 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:37:24.753164 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:37:24.781951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:37:24.786417 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:37:24.786760 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:37:24.790911 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:37:24.791496 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:37:24.805032 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:37:24.805667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:24.806597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:24.809997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:37:24.819314 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:37:24.819746 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:37:24.825976 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:37:24.848864 lvm[1848]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:37:24.871385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:37:24.894466 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:37:24.895991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:37:24.906663 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:37:24.928848 lvm[1860]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:37:24.980759 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:37:25.020322 systemd-resolved[1783]: Positive Trust Anchors: Jan 13 20:37:25.020817 systemd-resolved[1783]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:37:25.020893 systemd-resolved[1783]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:37:25.023170 systemd-networkd[1776]: lo: Link UP Jan 13 20:37:25.023180 systemd-networkd[1776]: lo: Gained carrier Jan 13 20:37:25.026788 systemd-networkd[1776]: Enumeration completed Jan 13 20:37:25.027617 systemd-networkd[1776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:37:25.027721 systemd-networkd[1776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:37:25.030276 systemd-networkd[1776]: eth0: Link UP Jan 13 20:37:25.030542 systemd-networkd[1776]: eth0: Gained carrier Jan 13 20:37:25.030625 systemd-networkd[1776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:37:25.040878 systemd-resolved[1783]: Defaulting to hostname 'linux'. Jan 13 20:37:25.045893 systemd-networkd[1776]: eth0: DHCPv4 address 172.31.25.143/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:37:25.084141 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:37:25.084594 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:37:25.085299 systemd[1]: Reached target network.target - Network. Jan 13 20:37:25.085592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:37:25.094035 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:37:25.095996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:25.098711 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:37:25.101535 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:37:25.103568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:37:25.105274 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:37:25.106824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:37:25.108287 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:37:25.110203 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:37:25.110250 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:37:25.111762 systemd[1]: Reached target timers.target - Timer Units.
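
The DHCPv4 lease above is internally consistent: with a /20 prefix, 172.31.25.143 falls in the 172.31.16.0/20 subnet, which contains both the gateway and the DHCP server at 172.31.16.1. Checked with Python's stdlib ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.25.143/20")
    print(iface.network)  # 172.31.16.0/20 (third octet masked to a multiple of 16)
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
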
Jan 13 20:37:25.115549 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:37:25.120093 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:37:25.131474 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:37:25.142631 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:37:25.144234 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:37:25.145429 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:37:25.146608 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:37:25.146814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:37:25.160009 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:37:25.164879 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:37:25.170958 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:37:25.181347 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:37:25.193075 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:37:25.195023 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:37:25.202015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:37:25.207551 jq[1872]: false Jan 13 20:37:25.227310 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:37:25.236626 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:37:25.273851 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:37:25.280037 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:37:25.297056 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:37:25.299209 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:37:25.299985 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:37:25.313755 extend-filesystems[1873]: Found loop4 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found loop5 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found loop6 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found loop7 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found nvme0n1 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found nvme0n1p1 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found nvme0n1p2 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found nvme0n1p3 Jan 13 20:37:25.313755 extend-filesystems[1873]: Found usr Jan 13 20:37:25.310899 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:37:25.376827 extend-filesystems[1873]: Found nvme0n1p4 Jan 13 20:37:25.376827 extend-filesystems[1873]: Found nvme0n1p6 Jan 13 20:37:25.376827 extend-filesystems[1873]: Found nvme0n1p7 Jan 13 20:37:25.376827 extend-filesystems[1873]: Found nvme0n1p9 Jan 13 20:37:25.376827 extend-filesystems[1873]: Checking size of /dev/nvme0n1p9 Jan 13 20:37:25.318889 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 20:37:25.328231 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:37:25.329844 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:37:25.333707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:37:25.334978 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:37:25.399662 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:37:25.397647 ntpd[1875]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting Jan 13 20:37:25.403063 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting Jan 13 20:37:25.403063 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:37:25.397677 ntpd[1875]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:37:25.399403 dbus-daemon[1871]: [system] SELinux support is enabled Jan 13 20:37:25.406740 ntpd[1875]: ---------------------------------------------------- Jan 13 20:37:25.408754 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: ---------------------------------------------------- Jan 13 20:37:25.408754 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:37:25.408754 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:37:25.408754 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: corporation. Support and training for ntp-4 are Jan 13 20:37:25.408754 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: available at https://www.nwtime.org/support Jan 13 20:37:25.408754 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: ---------------------------------------------------- Jan 13 20:37:25.406775 ntpd[1875]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:37:25.406786 ntpd[1875]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:37:25.406796 ntpd[1875]: corporation. Support and training for ntp-4 are Jan 13 20:37:25.406806 ntpd[1875]: available at https://www.nwtime.org/support Jan 13 20:37:25.406819 ntpd[1875]: ---------------------------------------------------- Jan 13 20:37:25.425878 ntpd[1875]: proto: precision = 0.067 usec (-24) Jan 13 20:37:25.426014 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: proto: precision = 0.067 usec (-24) Jan 13 20:37:25.454800 extend-filesystems[1873]: Resized partition /dev/nvme0n1p9 Jan 13 20:37:25.455893 jq[1890]: true Jan 13 20:37:25.431006 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:37:25.460282 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: basedate set to 2025-01-01 Jan 13 20:37:25.460282 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: gps base set to 2025-01-05 (week 2348) Jan 13 20:37:25.435107 dbus-daemon[1871]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1776 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:37:25.460638 extend-filesystems[1913]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:37:25.467213 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:37:25.431550 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 13 20:37:25.438464 ntpd[1875]: basedate set to 2025-01-01 Jan 13 20:37:25.463326 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:37:25.438492 ntpd[1875]: gps base set to 2025-01-05 (week 2348) Jan 13 20:37:25.463379 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:37:25.464873 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:37:25.464909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:37:25.475849 ntpd[1875]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:37:25.481794 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Listen normally on 3 eth0 172.31.25.143:123 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Listen normally on 4 lo [::1]:123 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: bind(21) AF_INET6 fe80::484:6cff:fe66:ab61%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: unable to create socket on eth0 (5) for fe80::484:6cff:fe66:ab61%2#123 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: failed to init interface for address fe80::484:6cff:fe66:ab61%2 Jan 13 20:37:25.483274 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: Listening on routing socket on fd #21 for interface updates Jan 13 20:37:25.475918 ntpd[1875]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:37:25.476121 ntpd[1875]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:37:25.476171 ntpd[1875]: Listen normally on 3 eth0 172.31.25.143:123 Jan 13 20:37:25.476211 ntpd[1875]: Listen normally on 4 lo [::1]:123 Jan 13 20:37:25.476263 ntpd[1875]: bind(21) AF_INET6 fe80::484:6cff:fe66:ab61%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:37:25.476287 ntpd[1875]: unable to create socket on eth0 (5) for fe80::484:6cff:fe66:ab61%2#123 Jan 13 20:37:25.476301 ntpd[1875]: failed to init interface for address fe80::484:6cff:fe66:ab61%2 Jan 13 20:37:25.476330 ntpd[1875]: Listening on routing socket on fd #21 for interface updates Jan 13 20:37:25.494716 ntpd[1875]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:37:25.501367 (ntainerd)[1905]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:37:25.502306 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:37:25.502306 ntpd[1875]: 13 Jan 20:37:25 ntpd[1875]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:37:25.494758 ntpd[1875]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:37:25.499409 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:37:25.510976 jq[1911]: true Jan 13 20:37:25.527986 update_engine[1888]: I20250113 
20:37:25.527489 1888 main.cc:92] Flatcar Update Engine starting Jan 13 20:37:25.528923 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:37:25.536863 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:37:25.538145 update_engine[1888]: I20250113 20:37:25.538086 1888 update_check_scheduler.cc:74] Next update check in 2m49s Jan 13 20:37:25.548991 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:37:25.568859 coreos-metadata[1870]: Jan 13 20:37:25.566 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:37:25.569724 coreos-metadata[1870]: Jan 13 20:37:25.569 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:37:25.574521 coreos-metadata[1870]: Jan 13 20:37:25.574 INFO Fetch successful Jan 13 20:37:25.574521 coreos-metadata[1870]: Jan 13 20:37:25.574 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:37:25.575796 coreos-metadata[1870]: Jan 13 20:37:25.575 INFO Fetch successful Jan 13 20:37:25.575796 coreos-metadata[1870]: Jan 13 20:37:25.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:37:25.576481 coreos-metadata[1870]: Jan 13 20:37:25.576 INFO Fetch successful Jan 13 20:37:25.576481 coreos-metadata[1870]: Jan 13 20:37:25.576 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:37:25.578221 coreos-metadata[1870]: Jan 13 20:37:25.577 INFO Fetch successful Jan 13 20:37:25.578221 coreos-metadata[1870]: Jan 13 20:37:25.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:37:25.578221 coreos-metadata[1870]: Jan 13 20:37:25.577 INFO Fetch failed with 404: resource not found Jan 13 20:37:25.578221 coreos-metadata[1870]: Jan 13 20:37:25.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:37:25.579514 coreos-metadata[1870]: Jan 13 20:37:25.578 INFO Fetch successful Jan 13 20:37:25.579514 coreos-metadata[1870]: Jan 13 20:37:25.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:37:25.579514 coreos-metadata[1870]: Jan 13 20:37:25.579 INFO Fetch successful Jan 13 20:37:25.579514 coreos-metadata[1870]: Jan 13 20:37:25.579 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:37:25.583330 coreos-metadata[1870]: Jan 13 20:37:25.582 INFO Fetch successful Jan 13 20:37:25.583330 coreos-metadata[1870]: Jan 13 20:37:25.583 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:37:25.585502 coreos-metadata[1870]: Jan 13 20:37:25.585 INFO Fetch successful Jan 13 20:37:25.585502 coreos-metadata[1870]: Jan 13 20:37:25.585 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:37:25.594779 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:37:25.594962 coreos-metadata[1870]: Jan 13 20:37:25.586 INFO Fetch successful Jan 13 20:37:25.615000 systemd-logind[1885]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:37:25.618492 systemd-logind[1885]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 20:37:25.619993 extend-filesystems[1913]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:37:25.619993 extend-filesystems[1913]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:37:25.619993 
extend-filesystems[1913]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:37:25.618543 systemd-logind[1885]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:37:25.634542 extend-filesystems[1873]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:37:25.618837 systemd-logind[1885]: New seat seat0. Jan 13 20:37:25.620211 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:37:25.624305 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:37:25.624718 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:37:25.706722 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:37:25.719360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:37:25.730709 bash[1946]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:37:25.733670 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:37:25.755814 systemd[1]: Starting sshkeys.service... Jan 13 20:37:25.793269 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:37:25.805003 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:37:25.845466 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1665) Jan 13 20:37:25.945227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:37:26.064438 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:37:26.064914 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:37:26.067652 dbus-daemon[1871]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1921 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:37:26.088341 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:37:26.174965 polkitd[2007]: Started polkitd version 121 Jan 13 20:37:26.210332 polkitd[2007]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:37:26.210425 polkitd[2007]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:37:26.215176 polkitd[2007]: Finished loading, compiling and executing 2 rules Jan 13 20:37:26.221354 locksmithd[1922]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:37:26.233356 coreos-metadata[1952]: Jan 13 20:37:26.233 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:37:26.236847 coreos-metadata[1952]: Jan 13 20:37:26.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:37:26.241956 coreos-metadata[1952]: Jan 13 20:37:26.239 INFO Fetch successful Jan 13 20:37:26.241956 coreos-metadata[1952]: Jan 13 20:37:26.239 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:37:26.241956 coreos-metadata[1952]: Jan 13 20:37:26.241 INFO Fetch successful Jan 13 20:37:26.244925 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:37:26.245175 systemd[1]: Started polkit.service - Authorization Manager. 
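The coreos-metadata fetches above use IMDSv2: a session token is PUT to /latest/api/token, then sent as a header on each versioned meta-data GET; the lone 404 on the ipv6 key is the expected response when the instance has no IPv6 address assigned. A manual replay of the same flow (illustrative, using curl):

    TOKEN=$(curl -sX PUT http://169.254.169.254/latest/api/token \
        -H 'X-aws-ec2-metadata-token-ttl-seconds: 60')
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/2021-01-03/meta-data/instance-id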
Jan 13 20:37:26.247376 unknown[1952]: wrote ssh authorized keys file for user: core Jan 13 20:37:26.248265 polkitd[2007]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:37:26.308042 systemd-resolved[1783]: System hostname changed to 'ip-172-31-25-143'. Jan 13 20:37:26.308967 systemd-hostnamed[1921]: Hostname set to (transient) Jan 13 20:37:26.340678 update-ssh-keys[2051]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:37:26.345935 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:37:26.353182 systemd[1]: Finished sshkeys.service. Jan 13 20:37:26.407946 ntpd[1875]: bind(24) AF_INET6 fe80::484:6cff:fe66:ab61%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:37:26.408527 ntpd[1875]: 13 Jan 20:37:26 ntpd[1875]: bind(24) AF_INET6 fe80::484:6cff:fe66:ab61%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:37:26.408527 ntpd[1875]: 13 Jan 20:37:26 ntpd[1875]: unable to create socket on eth0 (6) for fe80::484:6cff:fe66:ab61%2#123 Jan 13 20:37:26.408527 ntpd[1875]: 13 Jan 20:37:26 ntpd[1875]: failed to init interface for address fe80::484:6cff:fe66:ab61%2 Jan 13 20:37:26.407995 ntpd[1875]: unable to create socket on eth0 (6) for fe80::484:6cff:fe66:ab61%2#123 Jan 13 20:37:26.408011 ntpd[1875]: failed to init interface for address fe80::484:6cff:fe66:ab61%2 Jan 13 20:37:26.443922 systemd-networkd[1776]: eth0: Gained IPv6LL Jan 13 20:37:26.456276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:37:26.461388 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:37:26.470817 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:37:26.483902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:26.492193 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:37:26.618158 containerd[1905]: time="2025-01-13T20:37:26.618005511Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:37:26.635863 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:37:26.701510 amazon-ssm-agent[2068]: Initializing new seelog logger Jan 13 20:37:26.701912 amazon-ssm-agent[2068]: New Seelog Logger Creation Complete Jan 13 20:37:26.702326 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.702326 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 processing appconfig overrides Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 processing appconfig overrides Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
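ntpd's repeated "Cannot assign requested address" on fe80::484:6cff:fe66:ab61 is the usual race with IPv6 duplicate address detection: a link-local address cannot be bound while it is still tentative. Once systemd-networkd reports "Gained IPv6LL" (above), a later interface-update pass succeeds, visible further down as "Listen normally on 7 eth0". While the address is tentative it can be spotted with:

    ip -6 addr show dev eth0    # the fe80:: address carries a 'tentative' flag until DAD completes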
Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 processing appconfig overrides Jan 13 20:37:26.706167 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO Proxy environment variables: Jan 13 20:37:26.713733 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.713733 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:37:26.713733 amazon-ssm-agent[2068]: 2025/01/13 20:37:26 processing appconfig overrides Jan 13 20:37:26.722848 containerd[1905]: time="2025-01-13T20:37:26.721866883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:26.739248 containerd[1905]: time="2025-01-13T20:37:26.739187743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:26.739248 containerd[1905]: time="2025-01-13T20:37:26.739244005Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:37:26.739553 containerd[1905]: time="2025-01-13T20:37:26.739266742Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.740474468Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.740555438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.744850137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.744885405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.745135374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.745153659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.745171324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.745184975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:26.745842 containerd[1905]: time="2025-01-13T20:37:26.745268976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:37:26.748763 containerd[1905]: time="2025-01-13T20:37:26.748254887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:26.748763 containerd[1905]: time="2025-01-13T20:37:26.748509787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:26.748763 containerd[1905]: time="2025-01-13T20:37:26.748535784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:37:26.748763 containerd[1905]: time="2025-01-13T20:37:26.748644466Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:37:26.749784 containerd[1905]: time="2025-01-13T20:37:26.749750960Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:37:26.759030 containerd[1905]: time="2025-01-13T20:37:26.758983780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.759893958Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.759952212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.759978292Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.760130250Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.761800226Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.762965915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.763224491Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.763248038Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:37:26.763725 containerd[1905]: time="2025-01-13T20:37:26.763268158Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.763882032Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.763924097Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.763942929Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.763961493Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.764004180Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.764024786Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.764044216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.764087051Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:37:26.764231 containerd[1905]: time="2025-01-13T20:37:26.764117799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765764 containerd[1905]: time="2025-01-13T20:37:26.765733632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765854 containerd[1905]: time="2025-01-13T20:37:26.765785878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765854 containerd[1905]: time="2025-01-13T20:37:26.765809408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765854 containerd[1905]: time="2025-01-13T20:37:26.765829521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765982 containerd[1905]: time="2025-01-13T20:37:26.765865643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765982 containerd[1905]: time="2025-01-13T20:37:26.765887552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765982 containerd[1905]: time="2025-01-13T20:37:26.765907615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765982 containerd[1905]: time="2025-01-13T20:37:26.765942992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.765982 containerd[1905]: time="2025-01-13T20:37:26.765967193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766143 containerd[1905]: time="2025-01-13T20:37:26.765985919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766143 containerd[1905]: time="2025-01-13T20:37:26.766021117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766143 containerd[1905]: time="2025-01-13T20:37:26.766049577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766143 containerd[1905]: time="2025-01-13T20:37:26.766074491Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 13 20:37:26.766143 containerd[1905]: time="2025-01-13T20:37:26.766126601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766394 containerd[1905]: time="2025-01-13T20:37:26.766149292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766394 containerd[1905]: time="2025-01-13T20:37:26.766181862Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:37:26.766394 containerd[1905]: time="2025-01-13T20:37:26.766331051Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:37:26.766504 containerd[1905]: time="2025-01-13T20:37:26.766439071Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:37:26.766504 containerd[1905]: time="2025-01-13T20:37:26.766458180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:37:26.766504 containerd[1905]: time="2025-01-13T20:37:26.766476900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:37:26.766612 containerd[1905]: time="2025-01-13T20:37:26.766492026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:37:26.766612 containerd[1905]: time="2025-01-13T20:37:26.766530880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:37:26.766612 containerd[1905]: time="2025-01-13T20:37:26.766546887Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:37:26.766612 containerd[1905]: time="2025-01-13T20:37:26.766562412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:37:26.772180 containerd[1905]: time="2025-01-13T20:37:26.767266504Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:37:26.772180 containerd[1905]: time="2025-01-13T20:37:26.767345189Z" level=info msg="Connect containerd service" Jan 13 20:37:26.772180 containerd[1905]: time="2025-01-13T20:37:26.767476681Z" level=info msg="using legacy CRI server" Jan 13 20:37:26.772180 containerd[1905]: time="2025-01-13T20:37:26.767490808Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:37:26.772180 containerd[1905]: time="2025-01-13T20:37:26.769635133Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:37:26.776081 containerd[1905]: time="2025-01-13T20:37:26.776035010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:37:26.778704 
containerd[1905]: time="2025-01-13T20:37:26.777345197Z" level=info msg="Start subscribing containerd event" Jan 13 20:37:26.778704 containerd[1905]: time="2025-01-13T20:37:26.777992273Z" level=info msg="Start recovering state" Jan 13 20:37:26.778704 containerd[1905]: time="2025-01-13T20:37:26.778092215Z" level=info msg="Start event monitor" Jan 13 20:37:26.778704 containerd[1905]: time="2025-01-13T20:37:26.778125520Z" level=info msg="Start snapshots syncer" Jan 13 20:37:26.778704 containerd[1905]: time="2025-01-13T20:37:26.778139379Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:37:26.778704 containerd[1905]: time="2025-01-13T20:37:26.778151239Z" level=info msg="Start streaming server" Jan 13 20:37:26.779119 containerd[1905]: time="2025-01-13T20:37:26.779046151Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:37:26.779256 containerd[1905]: time="2025-01-13T20:37:26.779233828Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:37:26.779435 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:37:26.789708 containerd[1905]: time="2025-01-13T20:37:26.788015629Z" level=info msg="containerd successfully booted in 0.174712s" Jan 13 20:37:26.807040 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO http_proxy: Jan 13 20:37:26.839954 sshd_keygen[1912]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:37:26.882646 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:37:26.894580 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:37:26.899236 systemd[1]: Started sshd@0-172.31.25.143:22-139.178.89.65:58930.service - OpenSSH per-connection server daemon (139.178.89.65:58930). Jan 13 20:37:26.909014 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO no_proxy: Jan 13 20:37:26.915839 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:37:26.916059 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:37:26.929406 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:37:26.967442 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:37:26.980102 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:37:26.990448 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:37:26.991871 systemd[1]: Reached target getty.target - Login Prompts. 
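The huge "Start cri plugin with config" dump above is containerd's effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and CNI expected under /etc/cni/net.d, which also explains the benign "failed to load cni during init" error logged before any CNI plugin has installed a config. The same runtime options, expressed as a v2 config.toml fragment (illustrative only; Flatcar generates its own file):

    cat <<'EOF' >>/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF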
Jan 13 20:37:27.006514 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO https_proxy: Jan 13 20:37:27.108864 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO Agent will take identity from EC2 Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [Registrar] Starting registrar module Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:26 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:27 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:27 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:27 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:37:27.129100 amazon-ssm-agent[2068]: 2025-01-13 20:37:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:37:27.167441 sshd[2098]: Accepted publickey for core from 139.178.89.65 port 58930 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:27.170503 sshd-session[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:27.191037 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:37:27.199080 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:37:27.204030 systemd-logind[1885]: New session 1 of user core. Jan 13 20:37:27.206364 amazon-ssm-agent[2068]: 2025-01-13 20:37:27 INFO [CredentialRefresher] Next credential rotation will be in 31.68332305525 minutes Jan 13 20:37:27.225663 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:37:27.239283 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:37:27.247258 (systemd)[2110]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:37:27.429068 systemd[2110]: Queued start job for default target default.target. Jan 13 20:37:27.445485 systemd[2110]: Created slice app.slice - User Application Slice. Jan 13 20:37:27.445538 systemd[2110]: Reached target paths.target - Paths. Jan 13 20:37:27.445561 systemd[2110]: Reached target timers.target - Timers. Jan 13 20:37:27.451171 systemd[2110]: Starting dbus.socket - D-Bus User Message Bus Socket... 
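The first SSH login for core (session 1) pulls up the per-user service manager: user-runtime-dir@500 creates /run/user/500, user@500.service starts a systemd instance as uid 500 under user-500.slice, and the login itself runs as session-1.scope (started just below). Afterwards this can be inspected with, e.g.:

    loginctl list-sessions              # session 1, user core, from 139.178.89.65
    systemctl status user@500.service   # the per-user manager started above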
Jan 13 20:37:27.473413 systemd[2110]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:37:27.473749 systemd[2110]: Reached target sockets.target - Sockets. Jan 13 20:37:27.473899 systemd[2110]: Reached target basic.target - Basic System. Jan 13 20:37:27.474143 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:37:27.474824 systemd[2110]: Reached target default.target - Main User Target. Jan 13 20:37:27.474880 systemd[2110]: Startup finished in 218ms. Jan 13 20:37:27.483049 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:37:27.645658 systemd[1]: Started sshd@1-172.31.25.143:22-139.178.89.65:58942.service - OpenSSH per-connection server daemon (139.178.89.65:58942). Jan 13 20:37:27.820976 sshd[2121]: Accepted publickey for core from 139.178.89.65 port 58942 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:27.822297 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:27.829058 systemd-logind[1885]: New session 2 of user core. Jan 13 20:37:27.835913 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:37:27.972478 sshd[2123]: Connection closed by 139.178.89.65 port 58942 Jan 13 20:37:27.973787 sshd-session[2121]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:27.980082 systemd[1]: sshd@1-172.31.25.143:22-139.178.89.65:58942.service: Deactivated successfully. Jan 13 20:37:27.982586 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:37:27.986151 systemd-logind[1885]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:37:27.988959 systemd-logind[1885]: Removed session 2. Jan 13 20:37:28.013286 systemd[1]: Started sshd@2-172.31.25.143:22-139.178.89.65:58956.service - OpenSSH per-connection server daemon (139.178.89.65:58956). Jan 13 20:37:28.147290 amazon-ssm-agent[2068]: 2025-01-13 20:37:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:37:28.191265 sshd[2128]: Accepted publickey for core from 139.178.89.65 port 58956 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:28.190016 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:28.203478 systemd-logind[1885]: New session 3 of user core. Jan 13 20:37:28.210197 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:37:28.248537 amazon-ssm-agent[2068]: 2025-01-13 20:37:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2131) started Jan 13 20:37:28.342965 sshd[2136]: Connection closed by 139.178.89.65 port 58956 Jan 13 20:37:28.344978 sshd-session[2128]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:28.354579 amazon-ssm-agent[2068]: 2025-01-13 20:37:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:37:28.351170 systemd[1]: sshd@2-172.31.25.143:22-139.178.89.65:58956.service: Deactivated successfully. Jan 13 20:37:28.355852 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:37:28.359053 systemd-logind[1885]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:37:28.360422 systemd-logind[1885]: Removed session 3. Jan 13 20:37:28.848789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:37:28.851352 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:37:28.854633 systemd[1]: Startup finished in 717ms (kernel) + 8.354s (initrd) + 8.826s (userspace) = 17.898s. Jan 13 20:37:28.997529 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:29.026979 agetty[2105]: failed to open credentials directory Jan 13 20:37:29.026980 agetty[2104]: failed to open credentials directory Jan 13 20:37:29.407829 ntpd[1875]: Listen normally on 7 eth0 [fe80::484:6cff:fe66:ab61%2]:123 Jan 13 20:37:29.411061 ntpd[1875]: 13 Jan 20:37:29 ntpd[1875]: Listen normally on 7 eth0 [fe80::484:6cff:fe66:ab61%2]:123 Jan 13 20:37:30.808791 kubelet[2150]: E0113 20:37:30.808586 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:30.812111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:30.812313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:37:30.813030 systemd[1]: kubelet.service: Consumed 1.024s CPU time. Jan 13 20:37:33.201471 systemd-resolved[1783]: Clock change detected. Flushing caches. Jan 13 20:37:39.174638 systemd[1]: Started sshd@3-172.31.25.143:22-139.178.89.65:37704.service - OpenSSH per-connection server daemon (139.178.89.65:37704). Jan 13 20:37:39.335998 sshd[2163]: Accepted publickey for core from 139.178.89.65 port 37704 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:39.337625 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:39.342979 systemd-logind[1885]: New session 4 of user core. Jan 13 20:37:39.346011 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:37:39.475361 sshd[2165]: Connection closed by 139.178.89.65 port 37704 Jan 13 20:37:39.476356 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:39.482126 systemd[1]: sshd@3-172.31.25.143:22-139.178.89.65:37704.service: Deactivated successfully. Jan 13 20:37:39.486513 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:37:39.490078 systemd-logind[1885]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:37:39.494266 systemd-logind[1885]: Removed session 4. Jan 13 20:37:39.512297 systemd[1]: Started sshd@4-172.31.25.143:22-139.178.89.65:37720.service - OpenSSH per-connection server daemon (139.178.89.65:37720). Jan 13 20:37:39.684404 sshd[2170]: Accepted publickey for core from 139.178.89.65 port 37720 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:39.685433 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:39.695535 systemd-logind[1885]: New session 5 of user core. Jan 13 20:37:39.703445 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:37:39.824175 sshd[2172]: Connection closed by 139.178.89.65 port 37720 Jan 13 20:37:39.825554 sshd-session[2170]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:39.829429 systemd[1]: sshd@4-172.31.25.143:22-139.178.89.65:37720.service: Deactivated successfully. 
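The kubelet exit above (pid 2150) is expected on a node that has not been bootstrapped yet: the service insists on /var/lib/kubelet/config.yaml, and nothing has written it. The file it wants is a KubeletConfiguration; a minimal illustrative example of its shape is below (real clusters typically generate it via kubeadm or provisioning, with different values; recent kubelets also accept the runtime endpoint in the file):

    cat <<'EOF' >/var/lib/kubelet/config.yaml   # illustrative sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF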
Jan 13 20:37:39.832361 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:37:39.834602 systemd-logind[1885]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:37:39.836679 systemd-logind[1885]: Removed session 5. Jan 13 20:37:39.864585 systemd[1]: Started sshd@5-172.31.25.143:22-139.178.89.65:37736.service - OpenSSH per-connection server daemon (139.178.89.65:37736). Jan 13 20:37:40.044045 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 37736 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:40.044682 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:40.050257 systemd-logind[1885]: New session 6 of user core. Jan 13 20:37:40.061091 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:37:40.182814 sshd[2179]: Connection closed by 139.178.89.65 port 37736 Jan 13 20:37:40.184012 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:40.187177 systemd[1]: sshd@5-172.31.25.143:22-139.178.89.65:37736.service: Deactivated successfully. Jan 13 20:37:40.192120 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:37:40.193761 systemd-logind[1885]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:37:40.195399 systemd-logind[1885]: Removed session 6. Jan 13 20:37:40.218637 systemd[1]: Started sshd@6-172.31.25.143:22-139.178.89.65:37740.service - OpenSSH per-connection server daemon (139.178.89.65:37740). Jan 13 20:37:40.438239 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 37740 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:40.440143 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:40.459029 systemd-logind[1885]: New session 7 of user core. Jan 13 20:37:40.472240 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:37:40.622131 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:37:40.622533 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:40.640681 sudo[2187]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:40.668066 sshd[2186]: Connection closed by 139.178.89.65 port 37740 Jan 13 20:37:40.669069 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:40.673361 systemd[1]: sshd@6-172.31.25.143:22-139.178.89.65:37740.service: Deactivated successfully. Jan 13 20:37:40.675487 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:37:40.676948 systemd-logind[1885]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:37:40.678542 systemd-logind[1885]: Removed session 7. Jan 13 20:37:40.705249 systemd[1]: Started sshd@7-172.31.25.143:22-139.178.89.65:37742.service - OpenSSH per-connection server daemon (139.178.89.65:37742). Jan 13 20:37:40.894501 sshd[2192]: Accepted publickey for core from 139.178.89.65 port 37742 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:40.898383 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:40.906914 systemd-logind[1885]: New session 8 of user core. Jan 13 20:37:40.917058 systemd[1]: Started session-8.scope - Session 8 of User core. 
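The sudo records above (setenforce 1, and more below) run without any password prompt, consistent with Flatcar's stock passwordless sudo for the core user; the effective policy is equivalent to something like this sudoers entry (assumed for reference, the exact file the OS ships may differ):

    core ALL=(ALL) NOPASSWD: ALL    # assumed stock rule; verify with 'sudo -l' as core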
Jan 13 20:37:41.017529 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:37:41.018151 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:41.026077 sudo[2196]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:41.034002 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:37:41.034772 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:41.054370 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:37:41.144151 augenrules[2218]: No rules Jan 13 20:37:41.145986 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:37:41.146279 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:37:41.148713 sudo[2195]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:41.171366 sshd[2194]: Connection closed by 139.178.89.65 port 37742 Jan 13 20:37:41.173358 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:41.178440 systemd[1]: sshd@7-172.31.25.143:22-139.178.89.65:37742.service: Deactivated successfully. Jan 13 20:37:41.180995 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:37:41.183040 systemd-logind[1885]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:37:41.185086 systemd-logind[1885]: Removed session 8. Jan 13 20:37:41.208779 systemd[1]: Started sshd@8-172.31.25.143:22-139.178.89.65:49240.service - OpenSSH per-connection server daemon (139.178.89.65:49240). Jan 13 20:37:41.385840 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 49240 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:41.387334 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:41.393640 systemd-logind[1885]: New session 9 of user core. Jan 13 20:37:41.409195 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:37:41.508859 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:37:41.509445 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:41.831628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:37:41.839073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:42.162933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:42.174444 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:42.239607 kubelet[2251]: E0113 20:37:42.238882 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:42.244986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:42.245179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:37:43.060393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
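The audit-rules exchange above: the two stock rule files are removed from /etc/audit/rules.d/, audit-rules.service is restarted, and augenrules correctly reports "No rules". The service is a thin wrapper around the same tooling; the manual equivalent is:

    augenrules --load   # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l         # list loaded rules; prints 'No rules' here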
Jan 13 20:37:43.072689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:43.112157 systemd[1]: Reloading requested from client PID 2284 ('systemctl') (unit session-9.scope)... Jan 13 20:37:43.112387 systemd[1]: Reloading... Jan 13 20:37:43.293821 zram_generator::config[2325]: No configuration found. Jan 13 20:37:43.466167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:43.578876 systemd[1]: Reloading finished in 465 ms. Jan 13 20:37:43.645583 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:37:43.645701 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:37:43.646034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:43.652243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:43.968220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:43.971054 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:37:44.068111 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:37:44.068111 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:37:44.068111 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:37:44.069091 kubelet[2385]: I0113 20:37:44.068190 2385 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:37:44.325845 kubelet[2385]: I0113 20:37:44.325612 2385 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:37:44.325845 kubelet[2385]: I0113 20:37:44.325644 2385 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:37:44.326002 kubelet[2385]: I0113 20:37:44.325938 2385 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:37:44.357618 kubelet[2385]: I0113 20:37:44.357573 2385 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:37:44.371860 kubelet[2385]: I0113 20:37:44.371829 2385 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:37:44.372188 kubelet[2385]: I0113 20:37:44.372162 2385 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:37:44.372387 kubelet[2385]: I0113 20:37:44.372360 2385 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:37:44.372534 kubelet[2385]: I0113 20:37:44.372402 2385 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:37:44.372534 kubelet[2385]: I0113 20:37:44.372418 2385 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:37:44.374908 kubelet[2385]: I0113 20:37:44.374775 2385 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:37:44.375079 kubelet[2385]: I0113 20:37:44.375054 2385 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:37:44.375220 kubelet[2385]: I0113 20:37:44.375088 2385 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:37:44.375220 kubelet[2385]: I0113 20:37:44.375210 2385 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:37:44.375310 kubelet[2385]: I0113 20:37:44.375236 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:37:44.377778 kubelet[2385]: E0113 20:37:44.377458 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:44.377778 kubelet[2385]: E0113 20:37:44.377512 2385 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:44.378447 kubelet[2385]: I0113 20:37:44.378419 2385 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:37:44.383457 kubelet[2385]: I0113 20:37:44.383412 2385 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:37:44.387030 kubelet[2385]: W0113 20:37:44.385793 2385 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
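The deprecation warnings and the nodeConfig dump above carry the kubelet's effective settings: systemd cgroup driver and the default hard-eviction thresholds (memory.available 100Mi, nodefs 10%, inodesFree 5%, imagefs 15%). Expressed as the corresponding KubeletConfiguration fields, printed here for reference rather than applied (merge by hand into /var/lib/kubelet/config.yaml):

    cat <<'EOF'
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    EOF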
Jan 13 20:37:44.387030 kubelet[2385]: I0113 20:37:44.386618 2385 server.go:1256] "Started kubelet" Jan 13 20:37:44.387554 kubelet[2385]: I0113 20:37:44.387532 2385 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:37:44.388696 kubelet[2385]: I0113 20:37:44.388671 2385 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:37:44.389596 kubelet[2385]: I0113 20:37:44.389574 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:37:44.390545 kubelet[2385]: I0113 20:37:44.390515 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:37:44.390724 kubelet[2385]: I0113 20:37:44.390702 2385 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:37:44.391236 kubelet[2385]: W0113 20:37:44.391213 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.25.143" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:37:44.391316 kubelet[2385]: E0113 20:37:44.391245 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.143" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:37:44.391316 kubelet[2385]: W0113 20:37:44.391305 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:37:44.391316 kubelet[2385]: E0113 20:37:44.391320 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:37:44.403018 kubelet[2385]: I0113 20:37:44.400687 2385 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:37:44.403018 kubelet[2385]: I0113 20:37:44.402310 2385 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:37:44.403018 kubelet[2385]: I0113 20:37:44.402538 2385 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:37:44.410547 kubelet[2385]: I0113 20:37:44.410516 2385 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:37:44.411681 kubelet[2385]: I0113 20:37:44.411561 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:37:44.414998 kubelet[2385]: I0113 20:37:44.414977 2385 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:37:44.431195 kubelet[2385]: E0113 20:37:44.431167 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.25.143\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 20:37:44.433838 kubelet[2385]: W0113 20:37:44.431607 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" 
at the cluster scope Jan 13 20:37:44.433838 kubelet[2385]: E0113 20:37:44.431838 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:37:44.434175 kubelet[2385]: E0113 20:37:44.434157 2385 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.143.181a5b00966cc484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.143,UID:172.31.25.143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.25.143,},FirstTimestamp:2025-01-13 20:37:44.386585732 +0000 UTC m=+0.408734600,LastTimestamp:2025-01-13 20:37:44.386585732 +0000 UTC m=+0.408734600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.143,}" Jan 13 20:37:44.436536 kubelet[2385]: I0113 20:37:44.436512 2385 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:37:44.436536 kubelet[2385]: I0113 20:37:44.436535 2385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:37:44.436792 kubelet[2385]: I0113 20:37:44.436725 2385 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:37:44.439656 kubelet[2385]: E0113 20:37:44.439630 2385 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.143.181a5b0099149b59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.143,UID:172.31.25.143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.25.143 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.25.143,},FirstTimestamp:2025-01-13 20:37:44.431139673 +0000 UTC m=+0.453288539,LastTimestamp:2025-01-13 20:37:44.431139673 +0000 UTC m=+0.453288539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.143,}" Jan 13 20:37:44.440998 kubelet[2385]: I0113 20:37:44.440881 2385 policy_none.go:49] "None policy: Start" Jan 13 20:37:44.442425 kubelet[2385]: I0113 20:37:44.442030 2385 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:37:44.442425 kubelet[2385]: I0113 20:37:44.442058 2385 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:37:44.447623 kubelet[2385]: E0113 20:37:44.447595 2385 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.143.181a5b009914ba93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.143,UID:172.31.25.143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.25.143 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.25.143,},FirstTimestamp:2025-01-13 20:37:44.431147667 +0000 UTC m=+0.453296513,LastTimestamp:2025-01-13 20:37:44.431147667 +0000 UTC m=+0.453296513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.143,}" Jan 13 20:37:44.452000 kubelet[2385]: E0113 20:37:44.451971 2385 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.143.181a5b009914ccc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.143,UID:172.31.25.143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.25.143 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.25.143,},FirstTimestamp:2025-01-13 20:37:44.431152324 +0000 UTC m=+0.453301171,LastTimestamp:2025-01-13 20:37:44.431152324 +0000 UTC m=+0.453301171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.143,}" Jan 13 20:37:44.460830 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:37:44.476351 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:37:44.483654 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:37:44.492775 kubelet[2385]: I0113 20:37:44.492146 2385 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:37:44.492775 kubelet[2385]: I0113 20:37:44.492525 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:37:44.496688 kubelet[2385]: E0113 20:37:44.496658 2385 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.143\" not found" Jan 13 20:37:44.502974 kubelet[2385]: I0113 20:37:44.502916 2385 kubelet_node_status.go:73] "Attempting to register node" node="172.31.25.143" Jan 13 20:37:44.513873 kubelet[2385]: I0113 20:37:44.513696 2385 kubelet_node_status.go:76] "Successfully registered node" node="172.31.25.143" Jan 13 20:37:44.537307 kubelet[2385]: I0113 20:37:44.537274 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:37:44.539758 kubelet[2385]: I0113 20:37:44.539727 2385 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:37:44.539988 kubelet[2385]: I0113 20:37:44.539772 2385 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:37:44.539988 kubelet[2385]: I0113 20:37:44.539792 2385 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:37:44.539988 kubelet[2385]: E0113 20:37:44.539860 2385 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 13 20:37:44.546618 kubelet[2385]: E0113 20:37:44.546588 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:44.647269 kubelet[2385]: E0113 20:37:44.647226 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:44.748048 kubelet[2385]: E0113 20:37:44.747996 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:44.848601 kubelet[2385]: E0113 20:37:44.848553 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:44.949630 kubelet[2385]: E0113 20:37:44.949512 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.050284 kubelet[2385]: E0113 20:37:45.050233 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.151108 kubelet[2385]: E0113 20:37:45.151054 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.251922 kubelet[2385]: E0113 20:37:45.251783 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.328540 kubelet[2385]: I0113 20:37:45.328488 2385 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:37:45.328716 kubelet[2385]: W0113 20:37:45.328675 2385 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:37:45.352714 kubelet[2385]: E0113 20:37:45.352667 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.378186 kubelet[2385]: E0113 20:37:45.378144 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:45.453553 kubelet[2385]: E0113 20:37:45.453506 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.540356 sudo[2229]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:45.553689 kubelet[2385]: E0113 20:37:45.553642 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.562849 sshd[2228]: Connection closed by 139.178.89.65 port 49240 Jan 13 20:37:45.563616 sshd-session[2226]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:45.567640 systemd[1]: sshd@8-172.31.25.143:22-139.178.89.65:49240.service: Deactivated successfully. Jan 13 20:37:45.569958 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 13 20:37:45.572231 systemd-logind[1885]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:37:45.573744 systemd-logind[1885]: Removed session 9. Jan 13 20:37:45.654256 kubelet[2385]: E0113 20:37:45.654212 2385 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.143\" not found" Jan 13 20:37:45.755437 kubelet[2385]: I0113 20:37:45.755396 2385 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:37:45.756919 kubelet[2385]: I0113 20:37:45.756676 2385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:37:45.757321 containerd[1905]: time="2025-01-13T20:37:45.756422324Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:37:46.378036 kubelet[2385]: I0113 20:37:46.377979 2385 apiserver.go:52] "Watching apiserver" Jan 13 20:37:46.378480 kubelet[2385]: E0113 20:37:46.378317 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:46.385439 kubelet[2385]: I0113 20:37:46.385393 2385 topology_manager.go:215] "Topology Admit Handler" podUID="bd0cd6a3-a8c0-4923-b635-90a27e682f50" podNamespace="calico-system" podName="calico-node-rd6lw" Jan 13 20:37:46.385595 kubelet[2385]: I0113 20:37:46.385528 2385 topology_manager.go:215] "Topology Admit Handler" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" podNamespace="calico-system" podName="csi-node-driver-rdc6r" Jan 13 20:37:46.385649 kubelet[2385]: I0113 20:37:46.385598 2385 topology_manager.go:215] "Topology Admit Handler" podUID="44478454-5aca-4751-8cf4-86e481132f49" podNamespace="kube-system" podName="kube-proxy-hdwvn" Jan 13 20:37:46.386489 kubelet[2385]: E0113 20:37:46.386462 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:37:46.401019 systemd[1]: Created slice kubepods-besteffort-pod44478454_5aca_4751_8cf4_86e481132f49.slice - libcontainer container kubepods-besteffort-pod44478454_5aca_4751_8cf4_86e481132f49.slice. 
Jan 13 20:37:46.404698 kubelet[2385]: I0113 20:37:46.404659 2385 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:37:46.418307 kubelet[2385]: I0113 20:37:46.418066 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-policysync\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.418307 kubelet[2385]: I0113 20:37:46.418183 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd0cd6a3-a8c0-4923-b635-90a27e682f50-tigera-ca-bundle\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.418307 kubelet[2385]: I0113 20:37:46.418239 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd0cd6a3-a8c0-4923-b635-90a27e682f50-node-certs\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.418307 kubelet[2385]: I0113 20:37:46.418281 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebe7be58-4bc5-48be-801d-57bfd992d603-socket-dir\") pod \"csi-node-driver-rdc6r\" (UID: \"ebe7be58-4bc5-48be-801d-57bfd992d603\") " pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:46.423141 kubelet[2385]: I0113 20:37:46.422782 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebe7be58-4bc5-48be-801d-57bfd992d603-registration-dir\") pod \"csi-node-driver-rdc6r\" (UID: \"ebe7be58-4bc5-48be-801d-57bfd992d603\") " pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:46.423141 kubelet[2385]: I0113 20:37:46.422947 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44478454-5aca-4751-8cf4-86e481132f49-xtables-lock\") pod \"kube-proxy-hdwvn\" (UID: \"44478454-5aca-4751-8cf4-86e481132f49\") " pod="kube-system/kube-proxy-hdwvn" Jan 13 20:37:46.423141 kubelet[2385]: I0113 20:37:46.423013 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-cni-net-dir\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.423141 kubelet[2385]: I0113 20:37:46.423067 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-cni-log-dir\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.423141 kubelet[2385]: I0113 20:37:46.423106 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmt8\" (UniqueName: \"kubernetes.io/projected/44478454-5aca-4751-8cf4-86e481132f49-kube-api-access-zrmt8\") pod \"kube-proxy-hdwvn\" (UID: 
\"44478454-5aca-4751-8cf4-86e481132f49\") " pod="kube-system/kube-proxy-hdwvn" Jan 13 20:37:46.423844 kubelet[2385]: I0113 20:37:46.423486 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-lib-modules\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.423844 kubelet[2385]: I0113 20:37:46.423549 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-xtables-lock\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.423844 kubelet[2385]: I0113 20:37:46.423587 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-cni-bin-dir\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.423844 kubelet[2385]: I0113 20:37:46.423636 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-flexvol-driver-host\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.423844 kubelet[2385]: I0113 20:37:46.423669 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebe7be58-4bc5-48be-801d-57bfd992d603-varrun\") pod \"csi-node-driver-rdc6r\" (UID: \"ebe7be58-4bc5-48be-801d-57bfd992d603\") " pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:46.424066 kubelet[2385]: I0113 20:37:46.423732 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlzz\" (UniqueName: \"kubernetes.io/projected/ebe7be58-4bc5-48be-801d-57bfd992d603-kube-api-access-cxlzz\") pod \"csi-node-driver-rdc6r\" (UID: \"ebe7be58-4bc5-48be-801d-57bfd992d603\") " pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:46.424066 kubelet[2385]: I0113 20:37:46.423780 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-var-run-calico\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.424542 kubelet[2385]: I0113 20:37:46.424195 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd0cd6a3-a8c0-4923-b635-90a27e682f50-var-lib-calico\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.424542 kubelet[2385]: I0113 20:37:46.424261 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ts6x\" (UniqueName: \"kubernetes.io/projected/bd0cd6a3-a8c0-4923-b635-90a27e682f50-kube-api-access-7ts6x\") pod \"calico-node-rd6lw\" (UID: \"bd0cd6a3-a8c0-4923-b635-90a27e682f50\") " 
pod="calico-system/calico-node-rd6lw" Jan 13 20:37:46.424542 kubelet[2385]: I0113 20:37:46.424299 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebe7be58-4bc5-48be-801d-57bfd992d603-kubelet-dir\") pod \"csi-node-driver-rdc6r\" (UID: \"ebe7be58-4bc5-48be-801d-57bfd992d603\") " pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:46.424542 kubelet[2385]: I0113 20:37:46.424353 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44478454-5aca-4751-8cf4-86e481132f49-kube-proxy\") pod \"kube-proxy-hdwvn\" (UID: \"44478454-5aca-4751-8cf4-86e481132f49\") " pod="kube-system/kube-proxy-hdwvn" Jan 13 20:37:46.424542 kubelet[2385]: I0113 20:37:46.424397 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44478454-5aca-4751-8cf4-86e481132f49-lib-modules\") pod \"kube-proxy-hdwvn\" (UID: \"44478454-5aca-4751-8cf4-86e481132f49\") " pod="kube-system/kube-proxy-hdwvn" Jan 13 20:37:46.434116 systemd[1]: Created slice kubepods-besteffort-podbd0cd6a3_a8c0_4923_b635_90a27e682f50.slice - libcontainer container kubepods-besteffort-podbd0cd6a3_a8c0_4923_b635_90a27e682f50.slice. Jan 13 20:37:46.528443 kubelet[2385]: E0113 20:37:46.528325 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.528443 kubelet[2385]: W0113 20:37:46.528351 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.528443 kubelet[2385]: E0113 20:37:46.528376 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.529292 kubelet[2385]: E0113 20:37:46.528588 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.529292 kubelet[2385]: W0113 20:37:46.528599 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.529292 kubelet[2385]: E0113 20:37:46.528711 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.529292 kubelet[2385]: E0113 20:37:46.529181 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.529292 kubelet[2385]: W0113 20:37:46.529194 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.529292 kubelet[2385]: E0113 20:37:46.529295 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:37:46.529767 kubelet[2385]: E0113 20:37:46.529677 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.529767 kubelet[2385]: W0113 20:37:46.529691 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.529767 kubelet[2385]: E0113 20:37:46.529710 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.530406 kubelet[2385]: E0113 20:37:46.529982 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.530406 kubelet[2385]: W0113 20:37:46.529996 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.530406 kubelet[2385]: E0113 20:37:46.530015 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.530406 kubelet[2385]: E0113 20:37:46.530204 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.530406 kubelet[2385]: W0113 20:37:46.530213 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.530406 kubelet[2385]: E0113 20:37:46.530229 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.530816 kubelet[2385]: E0113 20:37:46.530469 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.530816 kubelet[2385]: W0113 20:37:46.530480 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.530816 kubelet[2385]: E0113 20:37:46.530496 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.530816 kubelet[2385]: E0113 20:37:46.530792 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.530998 kubelet[2385]: W0113 20:37:46.530825 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.530998 kubelet[2385]: E0113 20:37:46.530865 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:37:46.531567 kubelet[2385]: E0113 20:37:46.531103 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.531567 kubelet[2385]: W0113 20:37:46.531113 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.531567 kubelet[2385]: E0113 20:37:46.531148 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.531567 kubelet[2385]: E0113 20:37:46.531370 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.531567 kubelet[2385]: W0113 20:37:46.531438 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.531567 kubelet[2385]: E0113 20:37:46.531469 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.532058 kubelet[2385]: E0113 20:37:46.531738 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.532058 kubelet[2385]: W0113 20:37:46.531749 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.532058 kubelet[2385]: E0113 20:37:46.531787 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.533829 kubelet[2385]: E0113 20:37:46.532333 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.533829 kubelet[2385]: W0113 20:37:46.532347 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.533829 kubelet[2385]: E0113 20:37:46.532363 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.536551 kubelet[2385]: E0113 20:37:46.536531 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.536551 kubelet[2385]: W0113 20:37:46.536552 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.536681 kubelet[2385]: E0113 20:37:46.536589 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:37:46.554247 kubelet[2385]: E0113 20:37:46.554203 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.554247 kubelet[2385]: W0113 20:37:46.554245 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.554622 kubelet[2385]: E0113 20:37:46.554362 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.566187 kubelet[2385]: E0113 20:37:46.566156 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.566187 kubelet[2385]: W0113 20:37:46.566179 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.566365 kubelet[2385]: E0113 20:37:46.566210 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:37:46.576433 kubelet[2385]: E0113 20:37:46.576348 2385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:37:46.576433 kubelet[2385]: W0113 20:37:46.576368 2385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:37:46.576433 kubelet[2385]: E0113 20:37:46.576395 2385 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:37:46.729454 containerd[1905]: time="2025-01-13T20:37:46.729340396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdwvn,Uid:44478454-5aca-4751-8cf4-86e481132f49,Namespace:kube-system,Attempt:0,}" Jan 13 20:37:46.737145 containerd[1905]: time="2025-01-13T20:37:46.737100144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rd6lw,Uid:bd0cd6a3-a8c0-4923-b635-90a27e682f50,Namespace:calico-system,Attempt:0,}" Jan 13 20:37:47.302985 containerd[1905]: time="2025-01-13T20:37:47.302661164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:47.305052 containerd[1905]: time="2025-01-13T20:37:47.304998595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:37:47.306339 containerd[1905]: time="2025-01-13T20:37:47.306300633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:47.308672 containerd[1905]: time="2025-01-13T20:37:47.307509575Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:47.308672 containerd[1905]: time="2025-01-13T20:37:47.307886377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:37:47.309558 containerd[1905]: time="2025-01-13T20:37:47.309502119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:37:47.312259 containerd[1905]: time="2025-01-13T20:37:47.311536995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.059444ms" Jan 13 20:37:47.316201 containerd[1905]: time="2025-01-13T20:37:47.316154457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 578.952769ms" Jan 13 20:37:47.382739 kubelet[2385]: E0113 20:37:47.379746 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:47.545483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355634642.mount: Deactivated successfully. Jan 13 20:37:47.591518 containerd[1905]: time="2025-01-13T20:37:47.587080603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:47.591518 containerd[1905]: time="2025-01-13T20:37:47.591166187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:47.591518 containerd[1905]: time="2025-01-13T20:37:47.591207888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:47.591518 containerd[1905]: time="2025-01-13T20:37:47.591395966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:47.593675 containerd[1905]: time="2025-01-13T20:37:47.593147730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:37:47.593675 containerd[1905]: time="2025-01-13T20:37:47.593219323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:37:47.593675 containerd[1905]: time="2025-01-13T20:37:47.593243291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:47.604321 containerd[1905]: time="2025-01-13T20:37:47.594049751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:37:47.761164 systemd[1]: Started cri-containerd-ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45.scope - libcontainer container ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45. Jan 13 20:37:47.770567 systemd[1]: Started cri-containerd-d4b45d5d79942650a14e08995b8a773e00868a721f6f01f0a0586f2ba4c5afe9.scope - libcontainer container d4b45d5d79942650a14e08995b8a773e00868a721f6f01f0a0586f2ba4c5afe9. Jan 13 20:37:47.828748 containerd[1905]: time="2025-01-13T20:37:47.827988338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rd6lw,Uid:bd0cd6a3-a8c0-4923-b635-90a27e682f50,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\"" Jan 13 20:37:47.832233 containerd[1905]: time="2025-01-13T20:37:47.832098930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:37:47.834832 containerd[1905]: time="2025-01-13T20:37:47.834782330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdwvn,Uid:44478454-5aca-4751-8cf4-86e481132f49,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4b45d5d79942650a14e08995b8a773e00868a721f6f01f0a0586f2ba4c5afe9\"" Jan 13 20:37:48.380028 kubelet[2385]: E0113 20:37:48.379905 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:48.542372 kubelet[2385]: E0113 20:37:48.540901 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:37:49.034333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297577618.mount: Deactivated successfully. 
Jan 13 20:37:49.217473 containerd[1905]: time="2025-01-13T20:37:49.217421929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:49.218773 containerd[1905]: time="2025-01-13T20:37:49.218646804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 20:37:49.221136 containerd[1905]: time="2025-01-13T20:37:49.219881933Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:49.222908 containerd[1905]: time="2025-01-13T20:37:49.222834983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:49.225432 containerd[1905]: time="2025-01-13T20:37:49.223953445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.391808858s" Jan 13 20:37:49.225432 containerd[1905]: time="2025-01-13T20:37:49.223996446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:37:49.226247 containerd[1905]: time="2025-01-13T20:37:49.226198757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:37:49.228739 containerd[1905]: time="2025-01-13T20:37:49.228687136Z" level=info msg="CreateContainer within sandbox \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:37:49.253130 containerd[1905]: time="2025-01-13T20:37:49.253085602Z" level=info msg="CreateContainer within sandbox \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586\"" Jan 13 20:37:49.253813 containerd[1905]: time="2025-01-13T20:37:49.253769463Z" level=info msg="StartContainer for \"a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586\"" Jan 13 20:37:49.303104 systemd[1]: Started cri-containerd-a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586.scope - libcontainer container a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586. Jan 13 20:37:49.352763 containerd[1905]: time="2025-01-13T20:37:49.352304031Z" level=info msg="StartContainer for \"a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586\" returns successfully" Jan 13 20:37:49.367567 systemd[1]: cri-containerd-a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586.scope: Deactivated successfully. 
Jan 13 20:37:49.381819 kubelet[2385]: E0113 20:37:49.380484 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:49.430403 containerd[1905]: time="2025-01-13T20:37:49.430322822Z" level=info msg="shim disconnected" id=a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586 namespace=k8s.io Jan 13 20:37:49.430403 containerd[1905]: time="2025-01-13T20:37:49.430385882Z" level=warning msg="cleaning up after shim disconnected" id=a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586 namespace=k8s.io Jan 13 20:37:49.430403 containerd[1905]: time="2025-01-13T20:37:49.430398644Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:37:49.989498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7cad3348ec97fc61defb0ba0f969d193cebc826a26fd02082aa4f330a1c0586-rootfs.mount: Deactivated successfully. Jan 13 20:37:50.381371 kubelet[2385]: E0113 20:37:50.381315 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:50.541087 kubelet[2385]: E0113 20:37:50.540596 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:37:50.805257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361231918.mount: Deactivated successfully. Jan 13 20:37:51.367071 containerd[1905]: time="2025-01-13T20:37:51.367019896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:51.368374 containerd[1905]: time="2025-01-13T20:37:51.368126310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:37:51.370103 containerd[1905]: time="2025-01-13T20:37:51.369512992Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:51.372545 containerd[1905]: time="2025-01-13T20:37:51.372480186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:51.373702 containerd[1905]: time="2025-01-13T20:37:51.373132161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.146871106s" Jan 13 20:37:51.373702 containerd[1905]: time="2025-01-13T20:37:51.373171000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:37:51.374606 containerd[1905]: time="2025-01-13T20:37:51.374544761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:37:51.375500 containerd[1905]: time="2025-01-13T20:37:51.375470235Z" level=info msg="CreateContainer within sandbox 
\"d4b45d5d79942650a14e08995b8a773e00868a721f6f01f0a0586f2ba4c5afe9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:37:51.381831 kubelet[2385]: E0113 20:37:51.381646 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:51.396038 containerd[1905]: time="2025-01-13T20:37:51.395990237Z" level=info msg="CreateContainer within sandbox \"d4b45d5d79942650a14e08995b8a773e00868a721f6f01f0a0586f2ba4c5afe9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8c6e130952ec72b89b3441a3e01e5f8e3fdc98b4242f23435287e7c6732c247\"" Jan 13 20:37:51.396701 containerd[1905]: time="2025-01-13T20:37:51.396668313Z" level=info msg="StartContainer for \"f8c6e130952ec72b89b3441a3e01e5f8e3fdc98b4242f23435287e7c6732c247\"" Jan 13 20:37:51.462096 systemd[1]: Started cri-containerd-f8c6e130952ec72b89b3441a3e01e5f8e3fdc98b4242f23435287e7c6732c247.scope - libcontainer container f8c6e130952ec72b89b3441a3e01e5f8e3fdc98b4242f23435287e7c6732c247. Jan 13 20:37:51.497353 containerd[1905]: time="2025-01-13T20:37:51.497195285Z" level=info msg="StartContainer for \"f8c6e130952ec72b89b3441a3e01e5f8e3fdc98b4242f23435287e7c6732c247\" returns successfully" Jan 13 20:37:52.382555 kubelet[2385]: E0113 20:37:52.382504 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:52.543825 kubelet[2385]: E0113 20:37:52.543165 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:37:53.382836 kubelet[2385]: E0113 20:37:53.382690 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:54.383388 kubelet[2385]: E0113 20:37:54.383335 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:54.544506 kubelet[2385]: E0113 20:37:54.543933 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:37:55.383815 kubelet[2385]: E0113 20:37:55.383741 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:55.561302 containerd[1905]: time="2025-01-13T20:37:55.561251243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:55.562528 containerd[1905]: time="2025-01-13T20:37:55.562404428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:37:55.563897 containerd[1905]: time="2025-01-13T20:37:55.563627804Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:55.565987 containerd[1905]: time="2025-01-13T20:37:55.565951808Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:55.566735 containerd[1905]: time="2025-01-13T20:37:55.566704752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.192121971s" Jan 13 20:37:55.566877 containerd[1905]: time="2025-01-13T20:37:55.566857382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:37:55.568887 containerd[1905]: time="2025-01-13T20:37:55.568859044Z" level=info msg="CreateContainer within sandbox \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:37:55.583500 containerd[1905]: time="2025-01-13T20:37:55.583454615Z" level=info msg="CreateContainer within sandbox \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3\"" Jan 13 20:37:55.584846 containerd[1905]: time="2025-01-13T20:37:55.584039127Z" level=info msg="StartContainer for \"c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3\"" Jan 13 20:37:55.622615 systemd[1]: run-containerd-runc-k8s.io-c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3-runc.fjtjgZ.mount: Deactivated successfully. Jan 13 20:37:55.630010 systemd[1]: Started cri-containerd-c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3.scope - libcontainer container c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3. Jan 13 20:37:55.667345 containerd[1905]: time="2025-01-13T20:37:55.667199571Z" level=info msg="StartContainer for \"c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3\" returns successfully" Jan 13 20:37:56.243998 containerd[1905]: time="2025-01-13T20:37:56.243776424Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:37:56.254275 systemd[1]: cri-containerd-c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3.scope: Deactivated successfully. Jan 13 20:37:56.286584 kubelet[2385]: I0113 20:37:56.286455 2385 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:37:56.296690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3-rootfs.mount: Deactivated successfully. Jan 13 20:37:56.385025 kubelet[2385]: E0113 20:37:56.384960 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:37:56.550062 systemd[1]: Created slice kubepods-besteffort-podebe7be58_4bc5_48be_801d_57bfd992d603.slice - libcontainer container kubepods-besteffort-podebe7be58_4bc5_48be_801d_57bfd992d603.slice. 
Jan 13 20:37:56.559042 containerd[1905]: time="2025-01-13T20:37:56.558001513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:0,}" Jan 13 20:37:56.628245 kubelet[2385]: I0113 20:37:56.627852 2385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hdwvn" podStartSLOduration=9.090694617 podStartE2EDuration="12.627779415s" podCreationTimestamp="2025-01-13 20:37:44 +0000 UTC" firstStartedPulling="2025-01-13 20:37:47.836539399 +0000 UTC m=+3.858688257" lastFinishedPulling="2025-01-13 20:37:51.3736242 +0000 UTC m=+7.395773055" observedRunningTime="2025-01-13 20:37:51.591734926 +0000 UTC m=+7.613883791" watchObservedRunningTime="2025-01-13 20:37:56.627779415 +0000 UTC m=+12.649928282" Jan 13 20:37:56.838073 containerd[1905]: time="2025-01-13T20:37:56.837967473Z" level=info msg="shim disconnected" id=c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3 namespace=k8s.io Jan 13 20:37:56.838073 containerd[1905]: time="2025-01-13T20:37:56.838049263Z" level=warning msg="cleaning up after shim disconnected" id=c7cd8378160bf4213f5d1873c54786b19df3fa5846af28a29471be01beaa76a3 namespace=k8s.io Jan 13 20:37:56.838073 containerd[1905]: time="2025-01-13T20:37:56.838061879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:37:56.939196 containerd[1905]: time="2025-01-13T20:37:56.939145700Z" level=error msg="Failed to destroy network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:37:56.940985 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51-shm.mount: Deactivated successfully. 
Jan 13 20:37:56.941632 containerd[1905]: time="2025-01-13T20:37:56.941592236Z" level=error msg="encountered an error cleaning up failed sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:37:56.942083 containerd[1905]: time="2025-01-13T20:37:56.941680327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:37:56.942628 kubelet[2385]: E0113 20:37:56.942189 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:37:56.942628 kubelet[2385]: E0113 20:37:56.942271 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:56.942628 kubelet[2385]: E0113 20:37:56.942300 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:37:56.942829 kubelet[2385]: E0113 20:37:56.942379 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:37:57.138916 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 20:37:57.385738 kubelet[2385]: E0113 20:37:57.385677 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:37:57.584484 kubelet[2385]: I0113 20:37:57.584450 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51"
Jan 13 20:37:57.585200 containerd[1905]: time="2025-01-13T20:37:57.585076183Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:37:57.587947 containerd[1905]: time="2025-01-13T20:37:57.587902566Z" level=info msg="Ensure that sandbox 074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51 in task-service has been cleanup successfully"
Jan 13 20:37:57.593717 containerd[1905]: time="2025-01-13T20:37:57.593269080Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:37:57.593717 containerd[1905]: time="2025-01-13T20:37:57.593606641Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:37:57.594482 systemd[1]: run-netns-cni\x2d45cbc7c0\x2de817\x2d1322\x2d20fe\x2dfd876f29e6f8.mount: Deactivated successfully.
Jan 13 20:37:57.597751 containerd[1905]: time="2025-01-13T20:37:57.596820019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:1,}"
Jan 13 20:37:57.608216 containerd[1905]: time="2025-01-13T20:37:57.608179401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:37:57.696664 containerd[1905]: time="2025-01-13T20:37:57.696608351Z" level=error msg="Failed to destroy network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:57.697479 containerd[1905]: time="2025-01-13T20:37:57.697028520Z" level=error msg="encountered an error cleaning up failed sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:57.697479 containerd[1905]: time="2025-01-13T20:37:57.697154320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:57.697589 kubelet[2385]: E0113 20:37:57.697483 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:57.697589 kubelet[2385]: E0113 20:37:57.697546 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:37:57.697589 kubelet[2385]: E0113 20:37:57.697580 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:37:57.697781 kubelet[2385]: E0113 20:37:57.697682 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603"
Jan 13 20:37:57.850086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21-shm.mount: Deactivated successfully.
Jan 13 20:37:58.386398 kubelet[2385]: E0113 20:37:58.386340 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:37:58.610867 kubelet[2385]: I0113 20:37:58.610697 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21"
Jan 13 20:37:58.617858 containerd[1905]: time="2025-01-13T20:37:58.612551075Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\""
Jan 13 20:37:58.617858 containerd[1905]: time="2025-01-13T20:37:58.613174135Z" level=info msg="Ensure that sandbox 79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21 in task-service has been cleanup successfully"
Jan 13 20:37:58.623194 containerd[1905]: time="2025-01-13T20:37:58.622362563Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully"
Jan 13 20:37:58.623194 containerd[1905]: time="2025-01-13T20:37:58.622410433Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully"
Jan 13 20:37:58.625725 containerd[1905]: time="2025-01-13T20:37:58.623526798Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:37:58.625725 containerd[1905]: time="2025-01-13T20:37:58.623637592Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:37:58.625725 containerd[1905]: time="2025-01-13T20:37:58.623714409Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:37:58.623920 systemd[1]: run-netns-cni\x2d4b1d5dc8\x2dff8c\x2dbb81\x2d9791\x2de2507f4e388c.mount: Deactivated successfully.
Jan 13 20:37:58.632139 containerd[1905]: time="2025-01-13T20:37:58.628733532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:2,}"
Jan 13 20:37:58.808731 containerd[1905]: time="2025-01-13T20:37:58.808588521Z" level=error msg="Failed to destroy network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:58.812284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb-shm.mount: Deactivated successfully.
Jan 13 20:37:58.813288 containerd[1905]: time="2025-01-13T20:37:58.813239548Z" level=error msg="encountered an error cleaning up failed sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:58.813388 containerd[1905]: time="2025-01-13T20:37:58.813320788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:58.814026 kubelet[2385]: E0113 20:37:58.813572 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:58.814026 kubelet[2385]: E0113 20:37:58.813639 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:37:58.814026 kubelet[2385]: E0113 20:37:58.813673 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:37:58.814327 kubelet[2385]: E0113 20:37:58.813739 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603"
Jan 13 20:37:59.387556 kubelet[2385]: E0113 20:37:59.387496 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:37:59.621920 kubelet[2385]: I0113 20:37:59.619516 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb"
Jan 13 20:37:59.622143 containerd[1905]: time="2025-01-13T20:37:59.620301646Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\""
Jan 13 20:37:59.622143 containerd[1905]: time="2025-01-13T20:37:59.620533022Z" level=info msg="Ensure that sandbox 9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb in task-service has been cleanup successfully"
Jan 13 20:37:59.625311 containerd[1905]: time="2025-01-13T20:37:59.625270187Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully"
Jan 13 20:37:59.625311 containerd[1905]: time="2025-01-13T20:37:59.625308300Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully"
Jan 13 20:37:59.625704 containerd[1905]: time="2025-01-13T20:37:59.625669668Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\""
Jan 13 20:37:59.625869 containerd[1905]: time="2025-01-13T20:37:59.625773190Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully"
Jan 13 20:37:59.625869 containerd[1905]: time="2025-01-13T20:37:59.625788301Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully"
Jan 13 20:37:59.626399 systemd[1]: run-netns-cni\x2d3a45e73e\x2dec2f\x2d8d81\x2d489b\x2d26d790255438.mount: Deactivated successfully.
Jan 13 20:37:59.629371 containerd[1905]: time="2025-01-13T20:37:59.629165525Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:37:59.629371 containerd[1905]: time="2025-01-13T20:37:59.629282120Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:37:59.629371 containerd[1905]: time="2025-01-13T20:37:59.629297730Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:37:59.630357 containerd[1905]: time="2025-01-13T20:37:59.630012002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:3,}"
Jan 13 20:37:59.730858 containerd[1905]: time="2025-01-13T20:37:59.728864662Z" level=error msg="Failed to destroy network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:59.731453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b-shm.mount: Deactivated successfully.
Jan 13 20:37:59.733970 containerd[1905]: time="2025-01-13T20:37:59.732491609Z" level=error msg="encountered an error cleaning up failed sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:59.733970 containerd[1905]: time="2025-01-13T20:37:59.732577335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:59.734101 kubelet[2385]: E0113 20:37:59.732936 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:37:59.734101 kubelet[2385]: E0113 20:37:59.732995 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:37:59.734101 kubelet[2385]: E0113 20:37:59.733030 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:37:59.734254 kubelet[2385]: E0113 20:37:59.733091 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603"
Jan 13 20:38:00.234749 kubelet[2385]: I0113 20:38:00.233925 2385 topology_manager.go:215] "Topology Admit Handler" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32" podNamespace="default" podName="nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:00.238241 kubelet[2385]: I0113 20:38:00.238205 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfjxs\" (UniqueName: \"kubernetes.io/projected/6504c9c9-bb3a-4e46-ac94-ffc964a9dc32-kube-api-access-zfjxs\") pod \"nginx-deployment-6d5f899847-9kjt9\" (UID: \"6504c9c9-bb3a-4e46-ac94-ffc964a9dc32\") " pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:00.271620 systemd[1]: Created slice kubepods-besteffort-pod6504c9c9_bb3a_4e46_ac94_ffc964a9dc32.slice - libcontainer container kubepods-besteffort-pod6504c9c9_bb3a_4e46_ac94_ffc964a9dc32.slice.
Jan 13 20:38:00.388382 kubelet[2385]: E0113 20:38:00.388226 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:00.578431 containerd[1905]: time="2025-01-13T20:38:00.578274724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:0,}"
Jan 13 20:38:00.628000 kubelet[2385]: I0113 20:38:00.627219 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b"
Jan 13 20:38:00.628543 containerd[1905]: time="2025-01-13T20:38:00.628502111Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\""
Jan 13 20:38:00.629847 containerd[1905]: time="2025-01-13T20:38:00.629819467Z" level=info msg="Ensure that sandbox b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b in task-service has been cleanup successfully"
Jan 13 20:38:00.630165 containerd[1905]: time="2025-01-13T20:38:00.630147145Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully"
Jan 13 20:38:00.630262 containerd[1905]: time="2025-01-13T20:38:00.630245687Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully"
Jan 13 20:38:00.630994 containerd[1905]: time="2025-01-13T20:38:00.630785682Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\""
Jan 13 20:38:00.631213 containerd[1905]: time="2025-01-13T20:38:00.631195461Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully"
Jan 13 20:38:00.631475 containerd[1905]: time="2025-01-13T20:38:00.631350896Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully"
Jan 13 20:38:00.632363 containerd[1905]: time="2025-01-13T20:38:00.631989823Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\""
Jan 13 20:38:00.632363 containerd[1905]: time="2025-01-13T20:38:00.632157867Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully"
Jan 13 20:38:00.632363 containerd[1905]: time="2025-01-13T20:38:00.632174039Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully"
Jan 13 20:38:00.634310 containerd[1905]: time="2025-01-13T20:38:00.634288800Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:38:00.634542 containerd[1905]: time="2025-01-13T20:38:00.634497686Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:38:00.634542 containerd[1905]: time="2025-01-13T20:38:00.634515958Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:38:00.636001 containerd[1905]: time="2025-01-13T20:38:00.635858114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:4,}"
Jan 13 20:38:00.638983 systemd[1]: run-netns-cni\x2d7857ac39\x2dd218\x2dd7e0\x2d5ae2\x2d8bb4e9c764c9.mount: Deactivated successfully.
Jan 13 20:38:00.783830 containerd[1905]: time="2025-01-13T20:38:00.783530081Z" level=error msg="Failed to destroy network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.784207 containerd[1905]: time="2025-01-13T20:38:00.784168184Z" level=error msg="encountered an error cleaning up failed sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.784409 containerd[1905]: time="2025-01-13T20:38:00.784353928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.785235 kubelet[2385]: E0113 20:38:00.784777 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.785235 kubelet[2385]: E0113 20:38:00.784865 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:00.785235 kubelet[2385]: E0113 20:38:00.784894 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:00.785418 kubelet[2385]: E0113 20:38:00.784958 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32"
Jan 13 20:38:00.790319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090-shm.mount: Deactivated successfully.
Jan 13 20:38:00.823743 containerd[1905]: time="2025-01-13T20:38:00.823563260Z" level=error msg="Failed to destroy network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.824335 containerd[1905]: time="2025-01-13T20:38:00.824294746Z" level=error msg="encountered an error cleaning up failed sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.825327 containerd[1905]: time="2025-01-13T20:38:00.824527764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.825442 kubelet[2385]: E0113 20:38:00.824905 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:00.825442 kubelet[2385]: E0113 20:38:00.824975 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:38:00.825442 kubelet[2385]: E0113 20:38:00.825007 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:38:00.825585 kubelet[2385]: E0113 20:38:00.825071 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603"
Jan 13 20:38:01.389545 kubelet[2385]: E0113 20:38:01.389419 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:01.628145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7-shm.mount: Deactivated successfully.
Jan 13 20:38:01.648877 kubelet[2385]: I0113 20:38:01.647530 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090"
Jan 13 20:38:01.649476 containerd[1905]: time="2025-01-13T20:38:01.649207282Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\""
Jan 13 20:38:01.652814 containerd[1905]: time="2025-01-13T20:38:01.652002630Z" level=info msg="Ensure that sandbox 1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090 in task-service has been cleanup successfully"
Jan 13 20:38:01.652814 containerd[1905]: time="2025-01-13T20:38:01.652637697Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully"
Jan 13 20:38:01.652814 containerd[1905]: time="2025-01-13T20:38:01.652664275Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully"
Jan 13 20:38:01.660225 systemd[1]: run-netns-cni\x2de44e8b3a\x2d8956\x2d8240\x2df0fa\x2d3e3ed038a540.mount: Deactivated successfully.
Jan 13 20:38:01.661433 containerd[1905]: time="2025-01-13T20:38:01.661019531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:1,}"
Jan 13 20:38:01.671106 kubelet[2385]: I0113 20:38:01.671053 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7"
Jan 13 20:38:01.672262 containerd[1905]: time="2025-01-13T20:38:01.671990465Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\""
Jan 13 20:38:01.672481 containerd[1905]: time="2025-01-13T20:38:01.672248165Z" level=info msg="Ensure that sandbox 96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7 in task-service has been cleanup successfully"
Jan 13 20:38:01.677678 systemd[1]: run-netns-cni\x2dab177e15\x2dd86a\x2df6e9\x2d92a3\x2dca17a13ee9c4.mount: Deactivated successfully.
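The retry pattern is visible in the Attempt counter above: each failed RunPodSandbox leaves behind a sandbox ID, kubelet then logs "Container not found in pod's containers" for it, and the next sync replays StopPodSandbox/TearDown for every previously failed sandbox (074ec..., 79b7..., 9133..., b3d6..., and so on) before trying again, so the teardown chain grows by one entry per attempt. A toy model of that bookkeeping (illustrative only, not kubelet or containerd source):

    package main

    import "fmt"

    func main() {
    	failed := []string{} // sandbox IDs left behind by failed attempts
    	for attempt := 0; attempt < 4; attempt++ {
    		// Replay teardown for every previously failed sandbox,
    		// matching the growing StopPodSandbox chains in the log.
    		for _, id := range failed {
    			fmt.Printf("attempt %d: StopPodSandbox %q\n", attempt, id)
    		}
    		id := fmt.Sprintf("sandbox-%d", attempt) // hypothetical ID
    		fmt.Printf("attempt %d: RunPodSandbox -> %q fails (CNI ADD)\n", attempt, id)
    		failed = append(failed, id) // its teardown joins the next cycle
    	}
    }

In the real system the loop is driven by the pod worker's periodic sync, which is why the attempts arrive roughly once per second in this log.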
Jan 13 20:38:01.678213 containerd[1905]: time="2025-01-13T20:38:01.676057025Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully"
Jan 13 20:38:01.678213 containerd[1905]: time="2025-01-13T20:38:01.678034457Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully"
Jan 13 20:38:01.681336 containerd[1905]: time="2025-01-13T20:38:01.680620769Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\""
Jan 13 20:38:01.681336 containerd[1905]: time="2025-01-13T20:38:01.680779906Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully"
Jan 13 20:38:01.681336 containerd[1905]: time="2025-01-13T20:38:01.680920042Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully"
Jan 13 20:38:01.682904 containerd[1905]: time="2025-01-13T20:38:01.682215776Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\""
Jan 13 20:38:01.682904 containerd[1905]: time="2025-01-13T20:38:01.682347983Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully"
Jan 13 20:38:01.682904 containerd[1905]: time="2025-01-13T20:38:01.682362781Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully"
Jan 13 20:38:01.683428 containerd[1905]: time="2025-01-13T20:38:01.683128019Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\""
Jan 13 20:38:01.683428 containerd[1905]: time="2025-01-13T20:38:01.683397559Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully"
Jan 13 20:38:01.683428 containerd[1905]: time="2025-01-13T20:38:01.683418467Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully"
Jan 13 20:38:01.683994 containerd[1905]: time="2025-01-13T20:38:01.683967146Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:38:01.684088 containerd[1905]: time="2025-01-13T20:38:01.684070897Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:38:01.684145 containerd[1905]: time="2025-01-13T20:38:01.684090965Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:38:01.685846 containerd[1905]: time="2025-01-13T20:38:01.685559720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:5,}"
Jan 13 20:38:02.023263 containerd[1905]: time="2025-01-13T20:38:02.023139337Z" level=error msg="Failed to destroy network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.030523 containerd[1905]: time="2025-01-13T20:38:02.030445691Z" level=error msg="encountered an error cleaning up failed sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.032010 containerd[1905]: time="2025-01-13T20:38:02.031918081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.036514 kubelet[2385]: E0113 20:38:02.034746 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.036514 kubelet[2385]: E0113 20:38:02.034844 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:02.036514 kubelet[2385]: E0113 20:38:02.034868 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:02.036900 kubelet[2385]: E0113 20:38:02.034998 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32"
Jan 13 20:38:02.048246 containerd[1905]: time="2025-01-13T20:38:02.047376168Z" level=error msg="Failed to destroy network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.048767 containerd[1905]: time="2025-01-13T20:38:02.048723091Z" level=error msg="encountered an error cleaning up failed sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.049029 containerd[1905]: time="2025-01-13T20:38:02.048995490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.049731 kubelet[2385]: E0113 20:38:02.049704 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:02.049864 kubelet[2385]: E0113 20:38:02.049765 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:38:02.050051 kubelet[2385]: E0113 20:38:02.049988 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:38:02.050138 kubelet[2385]: E0113 20:38:02.050078 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603"
Jan 13 20:38:02.389820 kubelet[2385]: E0113 20:38:02.389736 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:02.627340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903-shm.mount: Deactivated successfully.
Jan 13 20:38:02.681917 kubelet[2385]: I0113 20:38:02.680129 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903"
Jan 13 20:38:02.682024 containerd[1905]: time="2025-01-13T20:38:02.681725624Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\""
Jan 13 20:38:02.689833 containerd[1905]: time="2025-01-13T20:38:02.689651263Z" level=info msg="Ensure that sandbox 5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903 in task-service has been cleanup successfully"
Jan 13 20:38:02.690598 containerd[1905]: time="2025-01-13T20:38:02.690376672Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully"
Jan 13 20:38:02.690598 containerd[1905]: time="2025-01-13T20:38:02.690403162Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully"
Jan 13 20:38:02.693606 containerd[1905]: time="2025-01-13T20:38:02.693507800Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\""
Jan 13 20:38:02.693721 containerd[1905]: time="2025-01-13T20:38:02.693685657Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully"
Jan 13 20:38:02.693721 containerd[1905]: time="2025-01-13T20:38:02.693700873Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully"
Jan 13 20:38:02.695362 systemd[1]: run-netns-cni\x2df8ba3ad5\x2d5e40\x2d433a\x2dfd7c\x2d500f8c644f04.mount: Deactivated successfully.
Jan 13 20:38:02.699086 containerd[1905]: time="2025-01-13T20:38:02.698776730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:2,}"
Jan 13 20:38:02.705506 kubelet[2385]: I0113 20:38:02.705195 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82"
Jan 13 20:38:02.706729 containerd[1905]: time="2025-01-13T20:38:02.706695015Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\""
Jan 13 20:38:02.707400 containerd[1905]: time="2025-01-13T20:38:02.707295638Z" level=info msg="Ensure that sandbox a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82 in task-service has been cleanup successfully"
Jan 13 20:38:02.709878 containerd[1905]: time="2025-01-13T20:38:02.707811922Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully"
Jan 13 20:38:02.709878 containerd[1905]: time="2025-01-13T20:38:02.707835995Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully"
Jan 13 20:38:02.709878 containerd[1905]: time="2025-01-13T20:38:02.708469206Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\""
Jan 13 20:38:02.709878 containerd[1905]: time="2025-01-13T20:38:02.708569171Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully"
Jan 13 20:38:02.709878 containerd[1905]: time="2025-01-13T20:38:02.708584298Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully"
Jan 13 20:38:02.712399 systemd[1]: run-netns-cni\x2d506a90c5\x2db6ee\x2d5f5f\x2d0122\x2db7e284867b2b.mount: Deactivated successfully.
Jan 13 20:38:02.715231 containerd[1905]: time="2025-01-13T20:38:02.713713746Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\""
Jan 13 20:38:02.715231 containerd[1905]: time="2025-01-13T20:38:02.714054563Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully"
Jan 13 20:38:02.715231 containerd[1905]: time="2025-01-13T20:38:02.714075750Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully"
Jan 13 20:38:02.719276 containerd[1905]: time="2025-01-13T20:38:02.719236386Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\""
Jan 13 20:38:02.719395 containerd[1905]: time="2025-01-13T20:38:02.719358872Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully"
Jan 13 20:38:02.719395 containerd[1905]: time="2025-01-13T20:38:02.719374779Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully"
Jan 13 20:38:02.721816 containerd[1905]: time="2025-01-13T20:38:02.721755241Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\""
Jan 13 20:38:02.722184 containerd[1905]: time="2025-01-13T20:38:02.722103471Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully"
Jan 13 20:38:02.722353 containerd[1905]: time="2025-01-13T20:38:02.722184984Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully"
Jan 13 20:38:02.724424 containerd[1905]: time="2025-01-13T20:38:02.723622262Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:38:02.724424 containerd[1905]: time="2025-01-13T20:38:02.723726689Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:38:02.724424 containerd[1905]: time="2025-01-13T20:38:02.723740857Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:38:02.727716 containerd[1905]: time="2025-01-13T20:38:02.727600496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:6,}"
Jan 13 20:38:03.066290 containerd[1905]: time="2025-01-13T20:38:03.065388103Z" level=error msg="Failed to destroy network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.067580 containerd[1905]: time="2025-01-13T20:38:03.067440712Z" level=error msg="encountered an error cleaning up failed sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.067854 containerd[1905]: time="2025-01-13T20:38:03.067609486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.070023 kubelet[2385]: E0113 20:38:03.068453 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.070023 kubelet[2385]: E0113 20:38:03.068581 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:03.070023 kubelet[2385]: E0113 20:38:03.068981 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9"
Jan 13 20:38:03.070233 kubelet[2385]: E0113 20:38:03.069117 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32"
Jan 13 20:38:03.074241 containerd[1905]: time="2025-01-13T20:38:03.074099951Z" level=error msg="Failed to destroy network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.074667 containerd[1905]: time="2025-01-13T20:38:03.074622918Z" level=error msg="encountered an error cleaning up failed sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.074757 containerd[1905]: time="2025-01-13T20:38:03.074697522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.075135 kubelet[2385]: E0113 20:38:03.075112 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:03.075944 kubelet[2385]: E0113 20:38:03.075809 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:38:03.075944 kubelet[2385]: E0113 20:38:03.075853 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r"
Jan 13 20:38:03.076568 kubelet[2385]: E0113 20:38:03.076347 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603"
Jan 13 20:38:03.391217 kubelet[2385]: E0113 20:38:03.391166 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:03.624460 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068-shm.mount: Deactivated successfully.
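Each torn-down sandbox also takes a pair of systemd mount units with it: a run-netns-cni\x2d... unit backing the pod's network namespace and a ...-shm.mount unit for the sandbox's shared memory, both reported as "Deactivated successfully" above. The \x2d sequences are systemd's escaping of a literal "-" inside a unit name; a small illustrative decoder (this sketch only handles the one escape that appears in these logs, a full decoder would handle every \xNN sequence):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // unescapeUnit undoes the systemd escape seen in these logs:
    // "\x2d" encodes a literal "-" in a unit name.
    func unescapeUnit(s string) string {
    	return strings.ReplaceAll(s, `\x2d`, "-")
    }

    func main() {
    	unit := `run-netns-cni\x2d4117e402\x2d959f\x2dd8ad\x2d59f2\x2dd6b5d0bc0a51.mount`
    	fmt.Println(unescapeUnit(unit))
    	// Output: run-netns-cni-4117e402-959f-d8ad-59f2-d6b5d0bc0a51.mount
    }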
Jan 13 20:38:03.713052 kubelet[2385]: I0113 20:38:03.712950 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a"
Jan 13 20:38:03.714443 containerd[1905]: time="2025-01-13T20:38:03.714225689Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\""
Jan 13 20:38:03.715495 containerd[1905]: time="2025-01-13T20:38:03.715259385Z" level=info msg="Ensure that sandbox d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a in task-service has been cleanup successfully"
Jan 13 20:38:03.715714 containerd[1905]: time="2025-01-13T20:38:03.715662545Z" level=info msg="TearDown network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully"
Jan 13 20:38:03.715885 containerd[1905]: time="2025-01-13T20:38:03.715840192Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully"
Jan 13 20:38:03.720604 containerd[1905]: time="2025-01-13T20:38:03.720567787Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\""
Jan 13 20:38:03.721073 containerd[1905]: time="2025-01-13T20:38:03.720863630Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully"
Jan 13 20:38:03.721176 systemd[1]: run-netns-cni\x2d4117e402\x2d959f\x2dd8ad\x2d59f2\x2dd6b5d0bc0a51.mount: Deactivated successfully.
Jan 13 20:38:03.722107 containerd[1905]: time="2025-01-13T20:38:03.721689068Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully"
Jan 13 20:38:03.725122 containerd[1905]: time="2025-01-13T20:38:03.725086538Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\""
Jan 13 20:38:03.725234 containerd[1905]: time="2025-01-13T20:38:03.725207642Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully"
Jan 13 20:38:03.725234 containerd[1905]: time="2025-01-13T20:38:03.725223394Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully"
Jan 13 20:38:03.726217 containerd[1905]: time="2025-01-13T20:38:03.726064006Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\""
Jan 13 20:38:03.726217 containerd[1905]: time="2025-01-13T20:38:03.726164413Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully"
Jan 13 20:38:03.726217 containerd[1905]: time="2025-01-13T20:38:03.726180482Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully"
Jan 13 20:38:03.727995 kubelet[2385]: I0113 20:38:03.727094 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068"
Jan 13 20:38:03.728454 containerd[1905]: time="2025-01-13T20:38:03.728427167Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\""
Jan 13 20:38:03.728656 containerd[1905]: time="2025-01-13T20:38:03.728634117Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully"
Jan 13 20:38:03.728711 containerd[1905]: time="2025-01-13T20:38:03.728656793Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully"
Jan 13 20:38:03.728975 containerd[1905]: time="2025-01-13T20:38:03.728727200Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\""
Jan 13 20:38:03.729504 containerd[1905]: time="2025-01-13T20:38:03.729220276Z" level=info msg="Ensure that sandbox 2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068 in task-service has been cleanup successfully"
Jan 13 20:38:03.732284 containerd[1905]: time="2025-01-13T20:38:03.732246876Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully"
Jan 13 20:38:03.732284 containerd[1905]: time="2025-01-13T20:38:03.732283899Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully"
Jan 13 20:38:03.734843 systemd[1]: run-netns-cni\x2d4c285bc1\x2d7625\x2df120\x2d5f7e\x2dc2c96cafe065.mount: Deactivated successfully.
Jan 13 20:38:03.735504 containerd[1905]: time="2025-01-13T20:38:03.735182185Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\""
Jan 13 20:38:03.735504 containerd[1905]: time="2025-01-13T20:38:03.735295527Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully"
Jan 13 20:38:03.735504 containerd[1905]: time="2025-01-13T20:38:03.735313610Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully"
Jan 13 20:38:03.735504 containerd[1905]: time="2025-01-13T20:38:03.735426127Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\""
Jan 13 20:38:03.735504 containerd[1905]: time="2025-01-13T20:38:03.735495967Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully"
Jan 13 20:38:03.735995 containerd[1905]: time="2025-01-13T20:38:03.735508434Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully"
Jan 13 20:38:03.737793 containerd[1905]: time="2025-01-13T20:38:03.737383520Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\""
Jan 13 20:38:03.737793 containerd[1905]: time="2025-01-13T20:38:03.737497596Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully"
Jan 13 20:38:03.737793 containerd[1905]: time="2025-01-13T20:38:03.737511908Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully"
Jan 13 20:38:03.737793 containerd[1905]: time="2025-01-13T20:38:03.737613385Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\""
Jan 13 20:38:03.737793 containerd[1905]: time="2025-01-13T20:38:03.737684790Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully"
Jan 13 20:38:03.737793 containerd[1905]: time="2025-01-13T20:38:03.737697235Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully"
Jan 13 20:38:03.740779 containerd[1905]: time="2025-01-13T20:38:03.739179966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:3,}"
Jan 13 20:38:03.740779 containerd[1905]: time="2025-01-13T20:38:03.740140937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:7,}"
Jan 13 20:38:04.057393 containerd[1905]: time="2025-01-13T20:38:04.057092213Z" level=error msg="Failed to destroy network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:04.057666 containerd[1905]: time="2025-01-13T20:38:04.057629020Z" level=error msg="encountered an error cleaning up failed sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:04.057750 containerd[1905]: time="2025-01-13T20:38:04.057709128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:04.058048 kubelet[2385]: E0113 20:38:04.058023 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:04.058127 containerd[1905]: time="2025-01-13T20:38:04.058037796Z" level=error msg="Failed to destroy network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:04.058457 containerd[1905]: time="2025-01-13T20:38:04.058408182Z" level=error msg="encountered an error cleaning up failed sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:38:04.058521 containerd[1905]: time="2025-01-13T20:38:04.058501486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\": plugin type=\"calico\"
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:04.059377 kubelet[2385]: E0113 20:38:04.059305 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:04.059377 kubelet[2385]: E0113 20:38:04.059351 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:04.059484 kubelet[2385]: E0113 20:38:04.059417 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32" Jan 13 20:38:04.059627 kubelet[2385]: E0113 20:38:04.058789 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:04.059686 kubelet[2385]: E0113 20:38:04.059661 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:04.059733 kubelet[2385]: E0113 20:38:04.059698 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:04.059777 kubelet[2385]: E0113 20:38:04.059751 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:38:04.375881 kubelet[2385]: E0113 20:38:04.375816 2385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:04.392295 kubelet[2385]: E0113 20:38:04.392228 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:04.625132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7-shm.mount: Deactivated successfully. Jan 13 20:38:04.625263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869-shm.mount: Deactivated successfully. Jan 13 20:38:04.740300 kubelet[2385]: I0113 20:38:04.739182 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7" Jan 13 20:38:04.740634 containerd[1905]: time="2025-01-13T20:38:04.740598323Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:04.742451 containerd[1905]: time="2025-01-13T20:38:04.742126189Z" level=info msg="Ensure that sandbox fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7 in task-service has been cleanup successfully" Jan 13 20:38:04.746336 containerd[1905]: time="2025-01-13T20:38:04.745521695Z" level=info msg="TearDown network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" successfully" Jan 13 20:38:04.746336 containerd[1905]: time="2025-01-13T20:38:04.745555602Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" returns successfully" Jan 13 20:38:04.746487 containerd[1905]: time="2025-01-13T20:38:04.746455352Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:04.747874 containerd[1905]: time="2025-01-13T20:38:04.746559422Z" level=info msg="TearDown network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully" Jan 13 20:38:04.747874 containerd[1905]: time="2025-01-13T20:38:04.746621077Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully" Jan 13 20:38:04.746688 systemd[1]: run-netns-cni\x2de888eb59\x2d7fb4\x2d4da2\x2d416f\x2db25152498f7c.mount: Deactivated successfully. 
Jan 13 20:38:04.749532 containerd[1905]: time="2025-01-13T20:38:04.748964505Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:04.749532 containerd[1905]: time="2025-01-13T20:38:04.749175820Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully" Jan 13 20:38:04.749532 containerd[1905]: time="2025-01-13T20:38:04.749193854Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully" Jan 13 20:38:04.749737 containerd[1905]: time="2025-01-13T20:38:04.749621779Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:04.749737 containerd[1905]: time="2025-01-13T20:38:04.749707131Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully" Jan 13 20:38:04.749737 containerd[1905]: time="2025-01-13T20:38:04.749721313Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully" Jan 13 20:38:04.750204 kubelet[2385]: I0113 20:38:04.750179 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869" Jan 13 20:38:04.750929 containerd[1905]: time="2025-01-13T20:38:04.750906455Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:04.751653 containerd[1905]: time="2025-01-13T20:38:04.751622527Z" level=info msg="Ensure that sandbox 653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869 in task-service has been cleanup successfully" Jan 13 20:38:04.754075 containerd[1905]: time="2025-01-13T20:38:04.752469886Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:04.754075 containerd[1905]: time="2025-01-13T20:38:04.752565499Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully" Jan 13 20:38:04.754075 containerd[1905]: time="2025-01-13T20:38:04.752580534Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully" Jan 13 20:38:04.754075 containerd[1905]: time="2025-01-13T20:38:04.752666990Z" level=info msg="TearDown network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" successfully" Jan 13 20:38:04.754075 containerd[1905]: time="2025-01-13T20:38:04.752680606Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" returns successfully" Jan 13 20:38:04.755432 systemd[1]: run-netns-cni\x2dfe2b7654\x2de255\x2d8eae\x2df8e5\x2d59718c04ccb5.mount: Deactivated successfully. 
Jan 13 20:38:04.756921 containerd[1905]: time="2025-01-13T20:38:04.756893486Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:04.757532 containerd[1905]: time="2025-01-13T20:38:04.756951000Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:04.757532 containerd[1905]: time="2025-01-13T20:38:04.757427711Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully" Jan 13 20:38:04.757532 containerd[1905]: time="2025-01-13T20:38:04.757447038Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully" Jan 13 20:38:04.757532 containerd[1905]: time="2025-01-13T20:38:04.757471532Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully" Jan 13 20:38:04.757532 containerd[1905]: time="2025-01-13T20:38:04.757483593Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully" Jan 13 20:38:04.759018 containerd[1905]: time="2025-01-13T20:38:04.758939197Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:04.759083 containerd[1905]: time="2025-01-13T20:38:04.759068597Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully" Jan 13 20:38:04.759123 containerd[1905]: time="2025-01-13T20:38:04.759085116Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully" Jan 13 20:38:04.759181 containerd[1905]: time="2025-01-13T20:38:04.759163579Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:04.759266 containerd[1905]: time="2025-01-13T20:38:04.759243885Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully" Jan 13 20:38:04.759314 containerd[1905]: time="2025-01-13T20:38:04.759261866Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully" Jan 13 20:38:04.760763 containerd[1905]: time="2025-01-13T20:38:04.760736323Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:04.760871 containerd[1905]: time="2025-01-13T20:38:04.760842782Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully" Jan 13 20:38:04.760871 containerd[1905]: time="2025-01-13T20:38:04.760858698Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully" Jan 13 20:38:04.760952 containerd[1905]: time="2025-01-13T20:38:04.760931221Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:04.761032 containerd[1905]: time="2025-01-13T20:38:04.761009341Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully" Jan 13 20:38:04.761080 containerd[1905]: time="2025-01-13T20:38:04.761028327Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" 
returns successfully" Jan 13 20:38:04.762407 containerd[1905]: time="2025-01-13T20:38:04.762128216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:8,}" Jan 13 20:38:04.762643 containerd[1905]: time="2025-01-13T20:38:04.762618467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:4,}" Jan 13 20:38:05.014124 containerd[1905]: time="2025-01-13T20:38:05.013991723Z" level=error msg="Failed to destroy network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.016304 containerd[1905]: time="2025-01-13T20:38:05.014752463Z" level=error msg="encountered an error cleaning up failed sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.016304 containerd[1905]: time="2025-01-13T20:38:05.014847088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.017283 kubelet[2385]: E0113 20:38:05.016810 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.017283 kubelet[2385]: E0113 20:38:05.016878 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:05.017283 kubelet[2385]: E0113 20:38:05.016917 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:05.017545 kubelet[2385]: E0113 20:38:05.016985 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32" Jan 13 20:38:05.052088 containerd[1905]: time="2025-01-13T20:38:05.052034074Z" level=error msg="Failed to destroy network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.052950 containerd[1905]: time="2025-01-13T20:38:05.052912205Z" level=error msg="encountered an error cleaning up failed sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.053051 containerd[1905]: time="2025-01-13T20:38:05.052987366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.053601 kubelet[2385]: E0113 20:38:05.053255 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.053601 kubelet[2385]: E0113 20:38:05.053321 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:05.053601 kubelet[2385]: E0113 20:38:05.053349 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:05.053741 kubelet[2385]: E0113 20:38:05.053413 2385 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:38:05.393460 kubelet[2385]: E0113 20:38:05.393336 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:05.623305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d-shm.mount: Deactivated successfully. Jan 13 20:38:05.624171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0-shm.mount: Deactivated successfully. Jan 13 20:38:05.763839 kubelet[2385]: I0113 20:38:05.763370 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d" Jan 13 20:38:05.764375 containerd[1905]: time="2025-01-13T20:38:05.764345651Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" Jan 13 20:38:05.765493 containerd[1905]: time="2025-01-13T20:38:05.765254671Z" level=info msg="Ensure that sandbox eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d in task-service has been cleanup successfully" Jan 13 20:38:05.769174 containerd[1905]: time="2025-01-13T20:38:05.769140759Z" level=info msg="TearDown network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" successfully" Jan 13 20:38:05.769614 systemd[1]: run-netns-cni\x2d5d4d9c1b\x2df658\x2dac41\x2d0692\x2db7717f1eafb9.mount: Deactivated successfully. 
Jan 13 20:38:05.773961 containerd[1905]: time="2025-01-13T20:38:05.773458368Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" returns successfully" Jan 13 20:38:05.775062 kubelet[2385]: I0113 20:38:05.775032 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0" Jan 13 20:38:05.776371 containerd[1905]: time="2025-01-13T20:38:05.776127945Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:05.776371 containerd[1905]: time="2025-01-13T20:38:05.776244766Z" level=info msg="TearDown network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" successfully" Jan 13 20:38:05.776371 containerd[1905]: time="2025-01-13T20:38:05.776260790Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" returns successfully" Jan 13 20:38:05.776371 containerd[1905]: time="2025-01-13T20:38:05.776134725Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" Jan 13 20:38:05.777102 containerd[1905]: time="2025-01-13T20:38:05.776511829Z" level=info msg="Ensure that sandbox 95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0 in task-service has been cleanup successfully" Jan 13 20:38:05.779873 containerd[1905]: time="2025-01-13T20:38:05.777204747Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:05.779746 systemd[1]: run-netns-cni\x2d4aa24cab\x2db4ac\x2df4a2\x2d89ec\x2d3d7830623cc2.mount: Deactivated successfully. Jan 13 20:38:05.780484 containerd[1905]: time="2025-01-13T20:38:05.780347160Z" level=info msg="TearDown network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully" Jan 13 20:38:05.780657 containerd[1905]: time="2025-01-13T20:38:05.780375554Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully" Jan 13 20:38:05.780913 containerd[1905]: time="2025-01-13T20:38:05.780404392Z" level=info msg="TearDown network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" successfully" Jan 13 20:38:05.781199 containerd[1905]: time="2025-01-13T20:38:05.781177129Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" returns successfully" Jan 13 20:38:05.781758 containerd[1905]: time="2025-01-13T20:38:05.781724912Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:05.782592 containerd[1905]: time="2025-01-13T20:38:05.781863357Z" level=info msg="TearDown network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" successfully" Jan 13 20:38:05.782592 containerd[1905]: time="2025-01-13T20:38:05.781880979Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" returns successfully" Jan 13 20:38:05.782592 containerd[1905]: time="2025-01-13T20:38:05.781979258Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:05.782592 containerd[1905]: time="2025-01-13T20:38:05.782051953Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" 
successfully" Jan 13 20:38:05.782592 containerd[1905]: time="2025-01-13T20:38:05.782063631Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783169985Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783261262Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783275959Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783340983Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783407293Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783418316Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783754417Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783865821Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783882049Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.783948673Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:05.784032 containerd[1905]: time="2025-01-13T20:38:05.784033597Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully" Jan 13 20:38:05.784452 containerd[1905]: time="2025-01-13T20:38:05.784045894Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully" Jan 13 20:38:05.784452 containerd[1905]: time="2025-01-13T20:38:05.784406959Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:05.784539 containerd[1905]: time="2025-01-13T20:38:05.784488544Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully" Jan 13 20:38:05.784539 containerd[1905]: time="2025-01-13T20:38:05.784502033Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully" Jan 13 20:38:05.784620 containerd[1905]: time="2025-01-13T20:38:05.784561648Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:05.784665 containerd[1905]: time="2025-01-13T20:38:05.784629087Z" level=info msg="TearDown network for sandbox 
\"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully" Jan 13 20:38:05.784665 containerd[1905]: time="2025-01-13T20:38:05.784643785Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully" Jan 13 20:38:05.786248 containerd[1905]: time="2025-01-13T20:38:05.785316260Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:05.786248 containerd[1905]: time="2025-01-13T20:38:05.785472185Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully" Jan 13 20:38:05.786248 containerd[1905]: time="2025-01-13T20:38:05.785487989Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully" Jan 13 20:38:05.786248 containerd[1905]: time="2025-01-13T20:38:05.785339324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:5,}" Jan 13 20:38:05.787270 containerd[1905]: time="2025-01-13T20:38:05.787102009Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:05.787270 containerd[1905]: time="2025-01-13T20:38:05.787194765Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully" Jan 13 20:38:05.787270 containerd[1905]: time="2025-01-13T20:38:05.787209333Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully" Jan 13 20:38:05.788929 containerd[1905]: time="2025-01-13T20:38:05.788903663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:9,}" Jan 13 20:38:05.981079 containerd[1905]: time="2025-01-13T20:38:05.980938905Z" level=error msg="Failed to destroy network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.981945 containerd[1905]: time="2025-01-13T20:38:05.981875410Z" level=error msg="encountered an error cleaning up failed sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.982391 containerd[1905]: time="2025-01-13T20:38:05.982214654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.984129 kubelet[2385]: E0113 20:38:05.984031 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.984129 kubelet[2385]: E0113 20:38:05.984101 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:05.984129 kubelet[2385]: E0113 20:38:05.984131 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:05.984439 kubelet[2385]: E0113 20:38:05.984190 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32" Jan 13 20:38:05.991792 containerd[1905]: time="2025-01-13T20:38:05.991640731Z" level=error msg="Failed to destroy network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.992498 containerd[1905]: time="2025-01-13T20:38:05.992452035Z" level=error msg="encountered an error cleaning up failed sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.992586 containerd[1905]: time="2025-01-13T20:38:05.992529245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.992939 kubelet[2385]: E0113 20:38:05.992907 2385 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:05.993342 kubelet[2385]: E0113 20:38:05.993200 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:05.993342 kubelet[2385]: E0113 20:38:05.993243 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:05.993680 kubelet[2385]: E0113 20:38:05.993591 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:38:06.393695 kubelet[2385]: E0113 20:38:06.393654 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:06.629666 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1-shm.mount: Deactivated successfully. Jan 13 20:38:06.631160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225155369.mount: Deactivated successfully. 
Jan 13 20:38:06.674661 containerd[1905]: time="2025-01-13T20:38:06.674524173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:06.675770 containerd[1905]: time="2025-01-13T20:38:06.675645195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:38:06.677108 containerd[1905]: time="2025-01-13T20:38:06.677040876Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:06.681852 containerd[1905]: time="2025-01-13T20:38:06.681652741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:06.682740 containerd[1905]: time="2025-01-13T20:38:06.682698524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.074382343s" Jan 13 20:38:06.683063 containerd[1905]: time="2025-01-13T20:38:06.682737911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:38:06.712460 containerd[1905]: time="2025-01-13T20:38:06.712207495Z" level=info msg="CreateContainer within sandbox \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:38:06.741178 containerd[1905]: time="2025-01-13T20:38:06.741069492Z" level=info msg="CreateContainer within sandbox \"ea5aa27af7e57bfde8de1fb85615d49abb44e9a5b42c754397c72695bc78ee45\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f78774266631a6cdad57a3d4016089774ca9b84922d2e4798c107ac559d8d304\"" Jan 13 20:38:06.741724 containerd[1905]: time="2025-01-13T20:38:06.741697807Z" level=info msg="StartContainer for \"f78774266631a6cdad57a3d4016089774ca9b84922d2e4798c107ac559d8d304\"" Jan 13 20:38:06.789235 kubelet[2385]: I0113 20:38:06.789205 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493" Jan 13 20:38:06.790930 containerd[1905]: time="2025-01-13T20:38:06.790891124Z" level=info msg="StopPodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\"" Jan 13 20:38:06.791714 containerd[1905]: time="2025-01-13T20:38:06.791504093Z" level=info msg="Ensure that sandbox 28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493 in task-service has been cleanup successfully" Jan 13 20:38:06.793461 containerd[1905]: time="2025-01-13T20:38:06.793425971Z" level=info msg="TearDown network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" successfully" Jan 13 20:38:06.793461 containerd[1905]: time="2025-01-13T20:38:06.793458192Z" level=info msg="StopPodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" returns successfully" Jan 13 20:38:06.795298 containerd[1905]: time="2025-01-13T20:38:06.795270599Z" level=info 
msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" Jan 13 20:38:06.795452 containerd[1905]: time="2025-01-13T20:38:06.795370555Z" level=info msg="TearDown network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" successfully" Jan 13 20:38:06.795452 containerd[1905]: time="2025-01-13T20:38:06.795385401Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" returns successfully" Jan 13 20:38:06.796057 systemd[1]: run-netns-cni\x2d8f132420\x2d5015\x2d2848\x2deb51\x2df20bfcc79232.mount: Deactivated successfully. Jan 13 20:38:06.797162 containerd[1905]: time="2025-01-13T20:38:06.797131069Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:06.797500 containerd[1905]: time="2025-01-13T20:38:06.797248174Z" level=info msg="TearDown network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" successfully" Jan 13 20:38:06.797500 containerd[1905]: time="2025-01-13T20:38:06.797266940Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" returns successfully" Jan 13 20:38:06.797848 containerd[1905]: time="2025-01-13T20:38:06.797612946Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:06.799231 containerd[1905]: time="2025-01-13T20:38:06.798374548Z" level=info msg="TearDown network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully" Jan 13 20:38:06.799689 containerd[1905]: time="2025-01-13T20:38:06.799663991Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully" Jan 13 20:38:06.800335 containerd[1905]: time="2025-01-13T20:38:06.800030350Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:06.800335 containerd[1905]: time="2025-01-13T20:38:06.800174869Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully" Jan 13 20:38:06.800335 containerd[1905]: time="2025-01-13T20:38:06.800192171Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully" Jan 13 20:38:06.800792 containerd[1905]: time="2025-01-13T20:38:06.800755604Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:06.800884 containerd[1905]: time="2025-01-13T20:38:06.800862835Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully" Jan 13 20:38:06.800884 containerd[1905]: time="2025-01-13T20:38:06.800878144Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully" Jan 13 20:38:06.802878 containerd[1905]: time="2025-01-13T20:38:06.802712287Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:06.803542 containerd[1905]: time="2025-01-13T20:38:06.803485951Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully" Jan 13 20:38:06.803542 containerd[1905]: time="2025-01-13T20:38:06.803510950Z" level=info msg="StopPodSandbox for 
\"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully" Jan 13 20:38:06.804862 kubelet[2385]: I0113 20:38:06.804809 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1" Jan 13 20:38:06.805907 containerd[1905]: time="2025-01-13T20:38:06.805658347Z" level=info msg="StopPodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\"" Jan 13 20:38:06.805907 containerd[1905]: time="2025-01-13T20:38:06.805689862Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:06.805907 containerd[1905]: time="2025-01-13T20:38:06.805769779Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully" Jan 13 20:38:06.805907 containerd[1905]: time="2025-01-13T20:38:06.805783117Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully" Jan 13 20:38:06.806194 containerd[1905]: time="2025-01-13T20:38:06.805917358Z" level=info msg="Ensure that sandbox 7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1 in task-service has been cleanup successfully" Jan 13 20:38:06.806194 containerd[1905]: time="2025-01-13T20:38:06.806160345Z" level=info msg="TearDown network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" successfully" Jan 13 20:38:06.806194 containerd[1905]: time="2025-01-13T20:38:06.806177287Z" level=info msg="StopPodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" returns successfully" Jan 13 20:38:06.807216 containerd[1905]: time="2025-01-13T20:38:06.806788447Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:06.807454 containerd[1905]: time="2025-01-13T20:38:06.807350934Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully" Jan 13 20:38:06.807454 containerd[1905]: time="2025-01-13T20:38:06.807395730Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully" Jan 13 20:38:06.807759 containerd[1905]: time="2025-01-13T20:38:06.807652427Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" Jan 13 20:38:06.807899 containerd[1905]: time="2025-01-13T20:38:06.807866517Z" level=info msg="TearDown network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" successfully" Jan 13 20:38:06.808076 containerd[1905]: time="2025-01-13T20:38:06.807989560Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" returns successfully" Jan 13 20:38:06.808325 containerd[1905]: time="2025-01-13T20:38:06.808292991Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:06.808410 containerd[1905]: time="2025-01-13T20:38:06.808390001Z" level=info msg="TearDown network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" successfully" Jan 13 20:38:06.808478 containerd[1905]: time="2025-01-13T20:38:06.808411124Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" returns successfully" Jan 13 20:38:06.808527 containerd[1905]: 
time="2025-01-13T20:38:06.808479428Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:06.808569 containerd[1905]: time="2025-01-13T20:38:06.808553100Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully" Jan 13 20:38:06.808607 containerd[1905]: time="2025-01-13T20:38:06.808565959Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully" Jan 13 20:38:06.809068 containerd[1905]: time="2025-01-13T20:38:06.809042504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:10,}" Jan 13 20:38:06.809418 containerd[1905]: time="2025-01-13T20:38:06.809387504Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:06.809516 containerd[1905]: time="2025-01-13T20:38:06.809476546Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully" Jan 13 20:38:06.809516 containerd[1905]: time="2025-01-13T20:38:06.809491467Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully" Jan 13 20:38:06.809790 containerd[1905]: time="2025-01-13T20:38:06.809768182Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:06.810059 containerd[1905]: time="2025-01-13T20:38:06.809914772Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully" Jan 13 20:38:06.810059 containerd[1905]: time="2025-01-13T20:38:06.810055266Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully" Jan 13 20:38:06.810577 containerd[1905]: time="2025-01-13T20:38:06.810414168Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:06.810577 containerd[1905]: time="2025-01-13T20:38:06.810497677Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully" Jan 13 20:38:06.810577 containerd[1905]: time="2025-01-13T20:38:06.810513161Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully" Jan 13 20:38:06.811241 containerd[1905]: time="2025-01-13T20:38:06.811166869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:6,}" Jan 13 20:38:06.914420 systemd[1]: Started cri-containerd-f78774266631a6cdad57a3d4016089774ca9b84922d2e4798c107ac559d8d304.scope - libcontainer container f78774266631a6cdad57a3d4016089774ca9b84922d2e4798c107ac559d8d304. 
Jan 13 20:38:07.000684 containerd[1905]: time="2025-01-13T20:38:06.999943026Z" level=info msg="StartContainer for \"f78774266631a6cdad57a3d4016089774ca9b84922d2e4798c107ac559d8d304\" returns successfully" Jan 13 20:38:07.047300 containerd[1905]: time="2025-01-13T20:38:07.047158232Z" level=error msg="Failed to destroy network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.047903 containerd[1905]: time="2025-01-13T20:38:07.047668108Z" level=error msg="encountered an error cleaning up failed sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.047903 containerd[1905]: time="2025-01-13T20:38:07.047746649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.052172 kubelet[2385]: E0113 20:38:07.051728 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.052172 kubelet[2385]: E0113 20:38:07.051792 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:07.052172 kubelet[2385]: E0113 20:38:07.051836 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-9kjt9" Jan 13 20:38:07.052438 kubelet[2385]: E0113 20:38:07.051921 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-9kjt9_default(6504c9c9-bb3a-4e46-ac94-ffc964a9dc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-9kjt9" podUID="6504c9c9-bb3a-4e46-ac94-ffc964a9dc32" Jan 13 20:38:07.054207 containerd[1905]: time="2025-01-13T20:38:07.054070195Z" level=error msg="Failed to destroy network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.054727 containerd[1905]: time="2025-01-13T20:38:07.054603504Z" level=error msg="encountered an error cleaning up failed sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.054727 containerd[1905]: time="2025-01-13T20:38:07.054676992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.055375 kubelet[2385]: E0113 20:38:07.055157 2385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:38:07.055375 kubelet[2385]: E0113 20:38:07.055218 2385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:07.055375 kubelet[2385]: E0113 20:38:07.055276 2385 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdc6r" Jan 13 20:38:07.055543 kubelet[2385]: E0113 20:38:07.055339 2385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdc6r_calico-system(ebe7be58-4bc5-48be-801d-57bfd992d603)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdc6r" podUID="ebe7be58-4bc5-48be-801d-57bfd992d603" Jan 13 20:38:07.154839 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:38:07.154925 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 20:38:07.396374 kubelet[2385]: E0113 20:38:07.396321 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:07.625599 systemd[1]: run-netns-cni\x2df5dd517c\x2d6239\x2de263\x2ddfa2\x2dbcc7a1bfb705.mount: Deactivated successfully. Jan 13 20:38:07.813385 kubelet[2385]: I0113 20:38:07.813265 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b" Jan 13 20:38:07.816512 containerd[1905]: time="2025-01-13T20:38:07.816366102Z" level=info msg="StopPodSandbox for \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\"" Jan 13 20:38:07.822034 containerd[1905]: time="2025-01-13T20:38:07.816611450Z" level=info msg="Ensure that sandbox 308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b in task-service has been cleanup successfully" Jan 13 20:38:07.824343 containerd[1905]: time="2025-01-13T20:38:07.824297765Z" level=info msg="TearDown network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\" successfully" Jan 13 20:38:07.824343 containerd[1905]: time="2025-01-13T20:38:07.824338718Z" level=info msg="StopPodSandbox for \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\" returns successfully" Jan 13 20:38:07.827483 systemd[1]: run-netns-cni\x2d0d5c6afe\x2d0540\x2d0842\x2de6ea\x2d17f3ff8e0a69.mount: Deactivated successfully.
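
Every RunPodSandbox failure in this stretch bottoms out in one stat call: the Calico CNI plugin refuses ADD and DELETE until the calico/node container has written the host's node name to /var/lib/calico/nodename, which happens only once that container is running with /var/lib/calico/ mounted from the host. A minimal reconstruction of the gate, inferred from the error text rather than copied from Calico's source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodename models the check behind the errors above: CNI operations
// fail until calico/node has populated /var/lib/calico/nodename.
func nodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("error:", err) // what the retry loop above keeps hitting
		return
	}
	fmt.Println("node:", name)
}

The log bears the ordering out: calico-node is observed running at 20:38:07.866 (below), and the very next sandbox attempts, Attempt 11 and Attempt 7, come up at 20:38:08-09.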
Jan 13 20:38:07.832270 containerd[1905]: time="2025-01-13T20:38:07.830619190Z" level=info msg="StopPodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\"" Jan 13 20:38:07.832270 containerd[1905]: time="2025-01-13T20:38:07.830722910Z" level=info msg="TearDown network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" successfully" Jan 13 20:38:07.832650 containerd[1905]: time="2025-01-13T20:38:07.830739584Z" level=info msg="StopPodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" returns successfully" Jan 13 20:38:07.836094 containerd[1905]: time="2025-01-13T20:38:07.835260321Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" Jan 13 20:38:07.836094 containerd[1905]: time="2025-01-13T20:38:07.835379973Z" level=info msg="TearDown network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" successfully" Jan 13 20:38:07.836094 containerd[1905]: time="2025-01-13T20:38:07.835395074Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" returns successfully" Jan 13 20:38:07.837841 containerd[1905]: time="2025-01-13T20:38:07.835786846Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:07.838054 containerd[1905]: time="2025-01-13T20:38:07.838032309Z" level=info msg="TearDown network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" successfully" Jan 13 20:38:07.846396 containerd[1905]: time="2025-01-13T20:38:07.845072936Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" returns successfully" Jan 13 20:38:07.849225 containerd[1905]: time="2025-01-13T20:38:07.848165679Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:07.849484 containerd[1905]: time="2025-01-13T20:38:07.849272206Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully" Jan 13 20:38:07.849484 containerd[1905]: time="2025-01-13T20:38:07.849294174Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully" Jan 13 20:38:07.851251 containerd[1905]: time="2025-01-13T20:38:07.851204014Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:07.851383 containerd[1905]: time="2025-01-13T20:38:07.851304557Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully" Jan 13 20:38:07.851383 containerd[1905]: time="2025-01-13T20:38:07.851320901Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully" Jan 13 20:38:07.853819 containerd[1905]: time="2025-01-13T20:38:07.853709237Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:07.853897 containerd[1905]: time="2025-01-13T20:38:07.853833757Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully" Jan 13 20:38:07.853897 containerd[1905]: time="2025-01-13T20:38:07.853849684Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" 
returns successfully" Jan 13 20:38:07.855200 containerd[1905]: time="2025-01-13T20:38:07.854859134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:7,}" Jan 13 20:38:07.863706 kubelet[2385]: I0113 20:38:07.863674 2385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f" Jan 13 20:38:07.865012 containerd[1905]: time="2025-01-13T20:38:07.864955778Z" level=info msg="StopPodSandbox for \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\"" Jan 13 20:38:07.866865 containerd[1905]: time="2025-01-13T20:38:07.865441765Z" level=info msg="Ensure that sandbox e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f in task-service has been cleanup successfully" Jan 13 20:38:07.870139 containerd[1905]: time="2025-01-13T20:38:07.870085832Z" level=info msg="TearDown network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\" successfully" Jan 13 20:38:07.870369 containerd[1905]: time="2025-01-13T20:38:07.870346703Z" level=info msg="StopPodSandbox for \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\" returns successfully" Jan 13 20:38:07.874363 systemd[1]: run-netns-cni\x2d79f71e4d\x2d64bd\x2d0c6c\x2da223\x2d69ebeaeb8720.mount: Deactivated successfully. Jan 13 20:38:07.877466 containerd[1905]: time="2025-01-13T20:38:07.877426664Z" level=info msg="StopPodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\"" Jan 13 20:38:07.877696 containerd[1905]: time="2025-01-13T20:38:07.877675388Z" level=info msg="TearDown network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" successfully" Jan 13 20:38:07.877851 containerd[1905]: time="2025-01-13T20:38:07.877815103Z" level=info msg="StopPodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" returns successfully" Jan 13 20:38:07.884996 containerd[1905]: time="2025-01-13T20:38:07.884855396Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" Jan 13 20:38:07.890830 containerd[1905]: time="2025-01-13T20:38:07.885530945Z" level=info msg="TearDown network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" successfully" Jan 13 20:38:07.891300 containerd[1905]: time="2025-01-13T20:38:07.887939540Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" returns successfully" Jan 13 20:38:07.906820 containerd[1905]: time="2025-01-13T20:38:07.893975408Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:07.907336 containerd[1905]: time="2025-01-13T20:38:07.907283455Z" level=info msg="TearDown network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" successfully" Jan 13 20:38:07.918611 containerd[1905]: time="2025-01-13T20:38:07.918528107Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" returns successfully" Jan 13 20:38:07.936644 containerd[1905]: time="2025-01-13T20:38:07.935085699Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:07.936644 containerd[1905]: time="2025-01-13T20:38:07.935238033Z" level=info msg="TearDown network for sandbox 
\"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully" Jan 13 20:38:07.936644 containerd[1905]: time="2025-01-13T20:38:07.935251655Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully" Jan 13 20:38:07.940049 containerd[1905]: time="2025-01-13T20:38:07.940002794Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:07.940161 containerd[1905]: time="2025-01-13T20:38:07.940144293Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully" Jan 13 20:38:07.940229 containerd[1905]: time="2025-01-13T20:38:07.940161117Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully" Jan 13 20:38:07.949843 containerd[1905]: time="2025-01-13T20:38:07.942106017Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:07.951383 containerd[1905]: time="2025-01-13T20:38:07.950106242Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully" Jan 13 20:38:07.951535 containerd[1905]: time="2025-01-13T20:38:07.951392621Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully" Jan 13 20:38:07.966262 containerd[1905]: time="2025-01-13T20:38:07.966209468Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:07.972921 containerd[1905]: time="2025-01-13T20:38:07.967636674Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully" Jan 13 20:38:07.974086 containerd[1905]: time="2025-01-13T20:38:07.973934910Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully" Jan 13 20:38:07.983543 containerd[1905]: time="2025-01-13T20:38:07.983495298Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:07.984136 containerd[1905]: time="2025-01-13T20:38:07.983641669Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully" Jan 13 20:38:07.984136 containerd[1905]: time="2025-01-13T20:38:07.983658595Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully" Jan 13 20:38:07.989144 containerd[1905]: time="2025-01-13T20:38:07.986727250Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:07.991033 containerd[1905]: time="2025-01-13T20:38:07.989953666Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully" Jan 13 20:38:07.992161 containerd[1905]: time="2025-01-13T20:38:07.991917175Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully" Jan 13 20:38:07.994844 containerd[1905]: time="2025-01-13T20:38:07.994157457Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:07.995207 containerd[1905]: time="2025-01-13T20:38:07.995023210Z" level=info 
msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully" Jan 13 20:38:07.995207 containerd[1905]: time="2025-01-13T20:38:07.995050217Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully" Jan 13 20:38:07.999289 containerd[1905]: time="2025-01-13T20:38:07.999248604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:11,}" Jan 13 20:38:08.396941 kubelet[2385]: E0113 20:38:08.396887 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:08.610638 systemd-networkd[1776]: calia2c8c0f0a94: Link UP Jan 13 20:38:08.613198 systemd-networkd[1776]: calia2c8c0f0a94: Gained carrier Jan 13 20:38:08.615679 (udev-worker)[3454]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:38:08.653925 kubelet[2385]: I0113 20:38:08.648507 2385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-rd6lw" podStartSLOduration=5.796275004 podStartE2EDuration="24.648405813s" podCreationTimestamp="2025-01-13 20:37:44 +0000 UTC" firstStartedPulling="2025-01-13 20:37:47.83136284 +0000 UTC m=+3.853511687" lastFinishedPulling="2025-01-13 20:38:06.683493636 +0000 UTC m=+22.705642496" observedRunningTime="2025-01-13 20:38:07.866452409 +0000 UTC m=+23.888601276" watchObservedRunningTime="2025-01-13 20:38:08.648405813 +0000 UTC m=+24.670554680" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.079 [INFO][3508] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.286 [INFO][3508] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.143-k8s-csi--node--driver--rdc6r-eth0 csi-node-driver- calico-system ebe7be58-4bc5-48be-801d-57bfd992d603 990 0 2025-01-13 20:37:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.25.143 csi-node-driver-rdc6r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia2c8c0f0a94 [] []}} ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.286 [INFO][3508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.487 [INFO][3523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" HandleID="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Workload="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.524 [INFO][3523] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" HandleID="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Workload="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004082a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.25.143", "pod":"csi-node-driver-rdc6r", "timestamp":"2025-01-13 20:38:08.487184432 +0000 UTC"}, Hostname:"172.31.25.143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.524 [INFO][3523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.525 [INFO][3523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.525 [INFO][3523] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.143' Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.532 [INFO][3523] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.540 [INFO][3523] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.546 [INFO][3523] ipam/ipam.go 489: Trying affinity for 192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.548 [INFO][3523] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.561 [INFO][3523] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.561 [INFO][3523] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.564 [INFO][3523] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.572 [INFO][3523] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.578 [INFO][3523] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.65/26] block=192.168.52.64/26 handle="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.578 [INFO][3523] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.65/26] handle="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" host="172.31.25.143" Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.578 [INFO][3523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:38:08.671602 containerd[1905]: 2025-01-13 20:38:08.578 [INFO][3523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.65/26] IPv6=[] ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" HandleID="k8s-pod-network.105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Workload="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.673735 containerd[1905]: 2025-01-13 20:38:08.581 [INFO][3508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-csi--node--driver--rdc6r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebe7be58-4bc5-48be-801d-57bfd992d603", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"", Pod:"csi-node-driver-rdc6r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2c8c0f0a94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:38:08.673735 containerd[1905]: 2025-01-13 20:38:08.581 [INFO][3508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.65/32] ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.673735 containerd[1905]: 2025-01-13 20:38:08.581 [INFO][3508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2c8c0f0a94 ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.673735 containerd[1905]: 2025-01-13 20:38:08.614 [INFO][3508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.673735 containerd[1905]: 2025-01-13 20:38:08.614 [INFO][3508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" 
WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-csi--node--driver--rdc6r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebe7be58-4bc5-48be-801d-57bfd992d603", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee", Pod:"csi-node-driver-rdc6r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2c8c0f0a94", MAC:"a2:d9:0a:0e:aa:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:38:08.673735 containerd[1905]: 2025-01-13 20:38:08.653 [INFO][3508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee" Namespace="calico-system" Pod="csi-node-driver-rdc6r" WorkloadEndpoint="172.31.25.143-k8s-csi--node--driver--rdc6r-eth0" Jan 13 20:38:08.681410 (udev-worker)[3556]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:38:08.686932 systemd-networkd[1776]: califf3ac2297d1: Link UP Jan 13 20:38:08.687299 systemd-networkd[1776]: califf3ac2297d1: Gained carrier Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.084 [INFO][3494] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.286 [INFO][3494] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0 nginx-deployment-6d5f899847- default 6504c9c9-bb3a-4e46-ac94-ffc964a9dc32 1094 0 2025-01-13 20:38:00 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.143 nginx-deployment-6d5f899847-9kjt9 eth0 default [] [] [kns.default ksa.default.default] califf3ac2297d1 [] []}} ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.286 [INFO][3494] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.488 [INFO][3522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" HandleID="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Workload="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.527 [INFO][3522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" HandleID="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Workload="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b2980), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.143", "pod":"nginx-deployment-6d5f899847-9kjt9", "timestamp":"2025-01-13 20:38:08.487870523 +0000 UTC"}, Hostname:"172.31.25.143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.527 [INFO][3522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.578 [INFO][3522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.578 [INFO][3522] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.143' Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.581 [INFO][3522] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.587 [INFO][3522] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.595 [INFO][3522] ipam/ipam.go 489: Trying affinity for 192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.598 [INFO][3522] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.601 [INFO][3522] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.602 [INFO][3522] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.615 [INFO][3522] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857 Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.635 [INFO][3522] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.654 [INFO][3522] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.66/26] block=192.168.52.64/26 handle="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.654 [INFO][3522] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.66/26] handle="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" host="172.31.25.143" Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.654 [INFO][3522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
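
Worth noticing across the two IPAM traces: both CNI ADDs ran concurrently and the host-wide lock serialized them. The nginx request logs "About to acquire host-wide IPAM lock" at 08.527 but "Acquired" only at 08.578, the same instant the csi request released it, which is why the pods end up with consecutive addresses (.65 then .66) instead of colliding. The shape of that serialization, as a toy:

package main

import (
	"fmt"
	"sync"
)

var (
	ipamLock sync.Mutex // the "host-wide IPAM lock" from the traces
	next     = 65       // next free final octet in 192.168.52.64/26
)

func cniAdd(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(pod, "about to acquire host-wide IPAM lock")
	ipamLock.Lock()
	defer ipamLock.Unlock()
	fmt.Printf("%s assigned 192.168.52.%d\n", pod, next)
	next++
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go cniAdd("csi-node-driver-rdc6r", &wg)
	go cniAdd("nginx-deployment-6d5f899847-9kjt9", &wg)
	wg.Wait() // whichever ADD wins the lock gets .65, the other .66
}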
Jan 13 20:38:08.723166 containerd[1905]: 2025-01-13 20:38:08.654 [INFO][3522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.66/26] IPv6=[] ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" HandleID="k8s-pod-network.87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Workload="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.725555 containerd[1905]: 2025-01-13 20:38:08.677 [INFO][3494] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"6504c9c9-bb3a-4e46-ac94-ffc964a9dc32", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"", Pod:"nginx-deployment-6d5f899847-9kjt9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califf3ac2297d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:38:08.725555 containerd[1905]: 2025-01-13 20:38:08.677 [INFO][3494] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.66/32] ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.725555 containerd[1905]: 2025-01-13 20:38:08.677 [INFO][3494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf3ac2297d1 ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.725555 containerd[1905]: 2025-01-13 20:38:08.688 [INFO][3494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.725555 containerd[1905]: 2025-01-13 20:38:08.692 [INFO][3494] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"6504c9c9-bb3a-4e46-ac94-ffc964a9dc32", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857", Pod:"nginx-deployment-6d5f899847-9kjt9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califf3ac2297d1", MAC:"c2:8b:ea:e7:78:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:38:08.725555 containerd[1905]: 2025-01-13 20:38:08.706 [INFO][3494] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857" Namespace="default" Pod="nginx-deployment-6d5f899847-9kjt9" WorkloadEndpoint="172.31.25.143-k8s-nginx--deployment--6d5f899847--9kjt9-eth0" Jan 13 20:38:08.775277 containerd[1905]: time="2025-01-13T20:38:08.774071132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:08.776341 containerd[1905]: time="2025-01-13T20:38:08.776206190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:08.776341 containerd[1905]: time="2025-01-13T20:38:08.776298905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:08.776510 containerd[1905]: time="2025-01-13T20:38:08.776439733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:08.852760 systemd[1]: run-containerd-runc-k8s.io-105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee-runc.bZi9Aa.mount: Deactivated successfully. Jan 13 20:38:08.877643 containerd[1905]: time="2025-01-13T20:38:08.876447770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:08.877643 containerd[1905]: time="2025-01-13T20:38:08.876517803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:08.877643 containerd[1905]: time="2025-01-13T20:38:08.876538214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:08.877643 containerd[1905]: time="2025-01-13T20:38:08.876668388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:08.888147 systemd[1]: Started cri-containerd-105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee.scope - libcontainer container 105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee. Jan 13 20:38:08.977748 systemd[1]: Started cri-containerd-87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857.scope - libcontainer container 87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857. Jan 13 20:38:09.025839 containerd[1905]: time="2025-01-13T20:38:09.025751685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdc6r,Uid:ebe7be58-4bc5-48be-801d-57bfd992d603,Namespace:calico-system,Attempt:11,} returns sandbox id \"105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee\"" Jan 13 20:38:09.030065 containerd[1905]: time="2025-01-13T20:38:09.029952140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:38:09.142749 containerd[1905]: time="2025-01-13T20:38:09.142708418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-9kjt9,Uid:6504c9c9-bb3a-4e46-ac94-ffc964a9dc32,Namespace:default,Attempt:7,} returns sandbox id \"87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857\"" Jan 13 20:38:09.359305 kernel: bpftool[3785]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:38:09.399830 kubelet[2385]: E0113 20:38:09.398529 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:09.639210 systemd-networkd[1776]: vxlan.calico: Link UP Jan 13 20:38:09.639225 systemd-networkd[1776]: vxlan.calico: Gained carrier Jan 13 20:38:09.861949 systemd-networkd[1776]: calia2c8c0f0a94: Gained IPv6LL Jan 13 20:38:10.357598 containerd[1905]: time="2025-01-13T20:38:10.357545419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:10.358698 containerd[1905]: time="2025-01-13T20:38:10.358589157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:38:10.359837 containerd[1905]: time="2025-01-13T20:38:10.359695111Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:10.362015 containerd[1905]: time="2025-01-13T20:38:10.361962705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:10.362815 containerd[1905]: time="2025-01-13T20:38:10.362658947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.332645061s" Jan 13 20:38:10.362815 containerd[1905]: time="2025-01-13T20:38:10.362695804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:38:10.364019 containerd[1905]: time="2025-01-13T20:38:10.363987485Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:38:10.364958 containerd[1905]: time="2025-01-13T20:38:10.364930532Z" level=info msg="CreateContainer within sandbox \"105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:38:10.386979 containerd[1905]: time="2025-01-13T20:38:10.386924303Z" level=info msg="CreateContainer within sandbox \"105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9c4d7cb586498f2adfaca0732c3a01c1d77087b298332e917c79adad35b8b76f\"" Jan 13 20:38:10.387559 containerd[1905]: time="2025-01-13T20:38:10.387524612Z" level=info msg="StartContainer for \"9c4d7cb586498f2adfaca0732c3a01c1d77087b298332e917c79adad35b8b76f\"" Jan 13 20:38:10.402684 kubelet[2385]: E0113 20:38:10.400532 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:10.430007 systemd[1]: Started cri-containerd-9c4d7cb586498f2adfaca0732c3a01c1d77087b298332e917c79adad35b8b76f.scope - libcontainer container 9c4d7cb586498f2adfaca0732c3a01c1d77087b298332e917c79adad35b8b76f. Jan 13 20:38:10.466785 containerd[1905]: time="2025-01-13T20:38:10.466631534Z" level=info msg="StartContainer for \"9c4d7cb586498f2adfaca0732c3a01c1d77087b298332e917c79adad35b8b76f\" returns successfully" Jan 13 20:38:10.567538 systemd-networkd[1776]: califf3ac2297d1: Gained IPv6LL Jan 13 20:38:11.143228 systemd-networkd[1776]: vxlan.calico: Gained IPv6LL Jan 13 20:38:11.402023 kubelet[2385]: E0113 20:38:11.401633 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:11.752886 update_engine[1888]: I20250113 20:38:11.750844 1888 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:38:11.949823 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3454) Jan 13 20:38:12.271340 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3820) Jan 13 20:38:12.402424 kubelet[2385]: E0113 20:38:12.402389 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:12.602823 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3820) Jan 13 20:38:13.201551 ntpd[1875]: Listen normally on 8 vxlan.calico 192.168.52.64:123 Jan 13 20:38:13.201716 ntpd[1875]: Listen normally on 9 calia2c8c0f0a94 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 13 20:38:13.202222 ntpd[1875]: 13 Jan 20:38:13 ntpd[1875]: Listen normally on 8 vxlan.calico 192.168.52.64:123 Jan 13 20:38:13.202222 ntpd[1875]: 13 Jan 20:38:13 ntpd[1875]: Listen normally on 9 calia2c8c0f0a94 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 13 20:38:13.202222 ntpd[1875]: 13 Jan 20:38:13 ntpd[1875]: Listen normally on 10 califf3ac2297d1 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 20:38:13.202222 ntpd[1875]: 13 Jan 20:38:13 ntpd[1875]: Listen normally on 11 vxlan.calico [fe80::6445:c9ff:fe5c:25e5%5]:123 Jan 13 20:38:13.201777 ntpd[1875]: Listen normally on 10 califf3ac2297d1 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 20:38:13.201843 ntpd[1875]: Listen normally on 11 vxlan.calico [fe80::6445:c9ff:fe5c:25e5%5]:123 Jan 13 20:38:13.404258 kubelet[2385]: E0113 20:38:13.403969 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:13.949198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462010078.mount: Deactivated successfully. 
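
The ntpd records above show three new listeners because ntpd opens a socket per interface, and the two Calico veths plus the vxlan.calico device each carry an IPv6 link-local address. fe80:: addresses are only unique per interface, so they need the %zone suffix; %3, %4 and %5 are interface indexes (note the two veths share the same fe80::ecee:eeff:feee:eeee and differ only in zone). A small sketch of zoned addresses and per-interface link-local discovery:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// A zoned link-local address as it appears in the ntpd lines.
	a := netip.MustParseAddr("fe80::ecee:eeff:feee:eeee%3")
	fmt.Println(a.Zone(), a.IsLinkLocalUnicast()) // "3" true

	// Roughly what a per-interface listener does at startup:
	// enumerate interfaces and pick out link-local addresses.
	ifs, err := net.Interfaces()
	if err != nil {
		return
	}
	for _, i := range ifs {
		addrs, _ := i.Addrs()
		for _, addr := range addrs {
			if ipn, ok := addr.(*net.IPNet); ok && ipn.IP.IsLinkLocalUnicast() {
				fmt.Printf("%s: %s%%%d\n", i.Name, ipn.IP, i.Index)
			}
		}
	}
}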
Jan 13 20:38:14.404435 kubelet[2385]: E0113 20:38:14.404396 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:15.405388 kubelet[2385]: E0113 20:38:15.405321 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:15.466935 containerd[1905]: time="2025-01-13T20:38:15.466882560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:15.468212 containerd[1905]: time="2025-01-13T20:38:15.468035389Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 20:38:15.469834 containerd[1905]: time="2025-01-13T20:38:15.469229595Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:15.473049 containerd[1905]: time="2025-01-13T20:38:15.471971327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:15.473049 containerd[1905]: time="2025-01-13T20:38:15.472882647Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.108857398s" Jan 13 20:38:15.473049 containerd[1905]: time="2025-01-13T20:38:15.472917164Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:38:15.505496 containerd[1905]: time="2025-01-13T20:38:15.504287371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:38:15.514733 containerd[1905]: time="2025-01-13T20:38:15.514689598Z" level=info msg="CreateContainer within sandbox \"87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:38:15.538081 containerd[1905]: time="2025-01-13T20:38:15.538039324Z" level=info msg="CreateContainer within sandbox \"87dc988cf878220acf590bef88d6cc6e93eac3c92bbed9db8763e12cd3783857\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ab10741dafccf626989b87aff1dfb72e583dd514be7ae3c6c123c1ba15bdfc24\"" Jan 13 20:38:15.538862 containerd[1905]: time="2025-01-13T20:38:15.538830310Z" level=info msg="StartContainer for \"ab10741dafccf626989b87aff1dfb72e583dd514be7ae3c6c123c1ba15bdfc24\"" Jan 13 20:38:15.612103 systemd[1]: run-containerd-runc-k8s.io-ab10741dafccf626989b87aff1dfb72e583dd514be7ae3c6c123c1ba15bdfc24-runc.b1bSZ1.mount: Deactivated successfully. Jan 13 20:38:15.627191 systemd[1]: Started cri-containerd-ab10741dafccf626989b87aff1dfb72e583dd514be7ae3c6c123c1ba15bdfc24.scope - libcontainer container ab10741dafccf626989b87aff1dfb72e583dd514be7ae3c6c123c1ba15bdfc24. 
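
The two PullImage records in this stretch carry enough to estimate registry throughput: the calico/csi image moved 7,902,632 bytes in 1.332645061s (≈5.9 MB/s) and nginx moved 71,036,018 bytes in 5.108857398s (≈13.9 MB/s). The arithmetic, with the figures copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// "active requests=0, bytes read=..." and "in ...s" from the log.
	pulls := []struct {
		image string
		bytes float64
		dur   time.Duration
	}{
		{"ghcr.io/flatcar/calico/csi:v3.29.1", 7902632, 1332645061 * time.Nanosecond},
		{"ghcr.io/flatcar/nginx:latest", 71036018, 5108857398 * time.Nanosecond},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MB/s\n", p.image, p.bytes/p.dur.Seconds()/1e6)
	}
}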
Jan 13 20:38:15.670215 containerd[1905]: time="2025-01-13T20:38:15.670102815Z" level=info msg="StartContainer for \"ab10741dafccf626989b87aff1dfb72e583dd514be7ae3c6c123c1ba15bdfc24\" returns successfully" Jan 13 20:38:16.406070 kubelet[2385]: E0113 20:38:16.406019 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:17.210020 containerd[1905]: time="2025-01-13T20:38:17.209968237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:17.211942 containerd[1905]: time="2025-01-13T20:38:17.211772058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:38:17.214885 containerd[1905]: time="2025-01-13T20:38:17.214843651Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:17.218449 containerd[1905]: time="2025-01-13T20:38:17.218400443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.714073982s" Jan 13 20:38:17.218449 containerd[1905]: time="2025-01-13T20:38:17.218441700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:38:17.220199 containerd[1905]: time="2025-01-13T20:38:17.220167236Z" level=info msg="CreateContainer within sandbox \"105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:38:17.240491 containerd[1905]: time="2025-01-13T20:38:17.240428411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:17.242353 containerd[1905]: time="2025-01-13T20:38:17.242317944Z" level=info msg="CreateContainer within sandbox \"105451b6fa32147108d3f3d2da8e0d67d6379364ec2b56e75d22ec919a5fc4ee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f74ef270ccdba472bc93754c5aea9c2fc96a2a8cca50d173e3004204e44ad3ed\"" Jan 13 20:38:17.244557 containerd[1905]: time="2025-01-13T20:38:17.242920080Z" level=info msg="StartContainer for \"f74ef270ccdba472bc93754c5aea9c2fc96a2a8cca50d173e3004204e44ad3ed\"" Jan 13 20:38:17.322159 systemd[1]: Started cri-containerd-f74ef270ccdba472bc93754c5aea9c2fc96a2a8cca50d173e3004204e44ad3ed.scope - libcontainer container f74ef270ccdba472bc93754c5aea9c2fc96a2a8cca50d173e3004204e44ad3ed. 
Jan 13 20:38:17.390713 containerd[1905]: time="2025-01-13T20:38:17.390188930Z" level=info msg="StartContainer for \"f74ef270ccdba472bc93754c5aea9c2fc96a2a8cca50d173e3004204e44ad3ed\" returns successfully" Jan 13 20:38:17.406834 kubelet[2385]: E0113 20:38:17.406662 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:17.522668 kubelet[2385]: I0113 20:38:17.522466 2385 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:38:17.525012 kubelet[2385]: I0113 20:38:17.524624 2385 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:38:18.037892 kubelet[2385]: I0113 20:38:18.037850 2385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-9kjt9" podStartSLOduration=11.714527166 podStartE2EDuration="18.037790445s" podCreationTimestamp="2025-01-13 20:38:00 +0000 UTC" firstStartedPulling="2025-01-13 20:38:09.150190257 +0000 UTC m=+25.172339111" lastFinishedPulling="2025-01-13 20:38:15.473453536 +0000 UTC m=+31.495602390" observedRunningTime="2025-01-13 20:38:16.007426259 +0000 UTC m=+32.029575123" watchObservedRunningTime="2025-01-13 20:38:18.037790445 +0000 UTC m=+34.059939310" Jan 13 20:38:18.038125 kubelet[2385]: I0113 20:38:18.038085 2385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-rdc6r" podStartSLOduration=25.847828081 podStartE2EDuration="34.03805072s" podCreationTimestamp="2025-01-13 20:37:44 +0000 UTC" firstStartedPulling="2025-01-13 20:38:09.028555579 +0000 UTC m=+25.050704426" lastFinishedPulling="2025-01-13 20:38:17.218778211 +0000 UTC m=+33.240927065" observedRunningTime="2025-01-13 20:38:18.037675467 +0000 UTC m=+34.059824334" watchObservedRunningTime="2025-01-13 20:38:18.03805072 +0000 UTC m=+34.060199586" Jan 13 20:38:18.407450 kubelet[2385]: E0113 20:38:18.407344 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:19.408614 kubelet[2385]: E0113 20:38:19.408563 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:20.408781 kubelet[2385]: E0113 20:38:20.408721 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:21.409669 kubelet[2385]: E0113 20:38:21.409615 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:21.758510 kubelet[2385]: I0113 20:38:21.758204 2385 topology_manager.go:215] "Topology Admit Handler" podUID="c87659da-5935-4153-aaef-6810fb3156ad" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:38:21.788554 systemd[1]: Created slice kubepods-besteffort-podc87659da_5935_4153_aaef_6810fb3156ad.slice - libcontainer container kubepods-besteffort-podc87659da_5935_4153_aaef_6810fb3156ad.slice. 
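
The two pod_startup_latency_tracker entries above encode a simple relationship: podStartSLOduration is the end-to-end startup duration with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted out. Recomputing the nginx pod's numbers from the timestamps in the log confirms this (a worked check on the logged values, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry for
	// default/nginx-deployment-6d5f899847-9kjt9 above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-01-13 20:38:00 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-01-13 20:38:09.150190257 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-01-13 20:38:15.473453536 +0000 UTC")  // lastFinishedPulling
	watched := parse("2025-01-13 20:38:18.037790445 +0000 UTC")   // watchObservedRunningTime

	e2e := watched.Sub(created)          // podStartE2EDuration: 18.037790445s
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded: 11.714527166s
	fmt.Println(e2e, slo)
}
```

The csi-node-driver-rdc6r entry reconciles the same way: 34.03805072s minus its 8.190222632s pull window gives the logged 25.847828081s (up to rounding).
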
Jan 13 20:38:21.842227 kubelet[2385]: I0113 20:38:21.841936 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c87659da-5935-4153-aaef-6810fb3156ad-data\") pod \"nfs-server-provisioner-0\" (UID: \"c87659da-5935-4153-aaef-6810fb3156ad\") " pod="default/nfs-server-provisioner-0" Jan 13 20:38:21.842227 kubelet[2385]: I0113 20:38:21.842120 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2j68\" (UniqueName: \"kubernetes.io/projected/c87659da-5935-4153-aaef-6810fb3156ad-kube-api-access-w2j68\") pod \"nfs-server-provisioner-0\" (UID: \"c87659da-5935-4153-aaef-6810fb3156ad\") " pod="default/nfs-server-provisioner-0" Jan 13 20:38:22.097939 containerd[1905]: time="2025-01-13T20:38:22.097890068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c87659da-5935-4153-aaef-6810fb3156ad,Namespace:default,Attempt:0,}" Jan 13 20:38:22.376184 systemd-networkd[1776]: cali60e51b789ff: Link UP Jan 13 20:38:22.378618 systemd-networkd[1776]: cali60e51b789ff: Gained carrier Jan 13 20:38:22.383730 (udev-worker)[4323]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.203 [INFO][4305] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.143-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default c87659da-5935-4153-aaef-6810fb3156ad 1233 0 2025-01-13 20:38:21 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.25.143 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.203 [INFO][4305] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.299 [INFO][4315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" HandleID="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Workload="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.314 [INFO][4315] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" HandleID="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Workload="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002913b0), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.143", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:38:22.299308423 +0000 UTC"}, Hostname:"172.31.25.143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.314 [INFO][4315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.314 [INFO][4315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.314 [INFO][4315] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.143' Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.317 [INFO][4315] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.332 [INFO][4315] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.340 [INFO][4315] ipam/ipam.go 489: Trying affinity for 192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.343 [INFO][4315] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.347 [INFO][4315] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.347 [INFO][4315] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.350 [INFO][4315] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.358 [INFO][4315] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.367 [INFO][4315] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.67/26] block=192.168.52.64/26 handle="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.367 [INFO][4315] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.67/26] handle="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" host="172.31.25.143" Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.367 [INFO][4315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:38:22.399769 containerd[1905]: 2025-01-13 20:38:22.367 [INFO][4315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.67/26] IPv6=[] ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" HandleID="k8s-pod-network.0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Workload="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.405248 containerd[1905]: 2025-01-13 20:38:22.369 [INFO][4305] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c87659da-5935-4153-aaef-6810fb3156ad", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 38, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:38:22.405248 containerd[1905]: 2025-01-13 20:38:22.369 [INFO][4305] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.67/32] ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.405248 containerd[1905]: 2025-01-13 20:38:22.369 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.405248 containerd[1905]: 2025-01-13 20:38:22.379 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.405601 containerd[1905]: 2025-01-13 20:38:22.380 [INFO][4305] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c87659da-5935-4153-aaef-6810fb3156ad", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 38, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ce:a3:fb:76:fa:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:38:22.405601 containerd[1905]: 2025-01-13 20:38:22.397 [INFO][4305] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.143-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:38:22.418150 kubelet[2385]: E0113 20:38:22.413402 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:22.512294 containerd[1905]: time="2025-01-13T20:38:22.510197671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:22.512294 containerd[1905]: time="2025-01-13T20:38:22.510591676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:22.512294 containerd[1905]: time="2025-01-13T20:38:22.510646937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:22.512294 containerd[1905]: time="2025-01-13T20:38:22.510913749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:22.572051 systemd[1]: Started cri-containerd-0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c.scope - libcontainer container 0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c. 
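
Note that the WorkloadEndpointPort dump prints port numbers in hex; decoded, they are exactly the NFS service ports declared for nfs-server-provisioner-0 earlier in the same entry. A one-screen decoder over the values as they appear in the log:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpointPort dump above
	// (each TCP entry has a matching -udp twin on the same port).
	ports := []struct {
		name string
		port uint16
	}{
		{"nfs", 0x801}, {"nlockmgr", 0x8023}, {"mountd", 0x4e50},
		{"rquotad", 0x36b}, {"rpcbind", 0x6f}, {"statd", 0x296},
	}
	for _, p := range ports {
		fmt.Printf("%-8s %5d\n", p.name, p.port) // 2049 32803 20048 875 111 662
	}
}
```
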
Jan 13 20:38:22.637445 containerd[1905]: time="2025-01-13T20:38:22.637336212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c87659da-5935-4153-aaef-6810fb3156ad,Namespace:default,Attempt:0,} returns sandbox id \"0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c\"" Jan 13 20:38:22.641869 containerd[1905]: time="2025-01-13T20:38:22.641757319Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:38:23.416922 kubelet[2385]: E0113 20:38:23.416779 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:23.686419 systemd-networkd[1776]: cali60e51b789ff: Gained IPv6LL Jan 13 20:38:24.375486 kubelet[2385]: E0113 20:38:24.375444 2385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:24.423668 kubelet[2385]: E0113 20:38:24.423611 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:25.424684 kubelet[2385]: E0113 20:38:25.424642 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:26.201349 ntpd[1875]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 20:38:26.425772 kubelet[2385]: E0113 20:38:26.425727 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:26.751034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608690832.mount: Deactivated successfully.
Jan 13 20:38:27.433161 kubelet[2385]: E0113 20:38:27.426291 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:28.426436 kubelet[2385]: E0113 20:38:28.426397 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:29.149540 containerd[1905]: time="2025-01-13T20:38:29.149486842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:29.150794 containerd[1905]: time="2025-01-13T20:38:29.150746224Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 13 20:38:29.158607 containerd[1905]: time="2025-01-13T20:38:29.158549933Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:29.165927 containerd[1905]: time="2025-01-13T20:38:29.165869419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:29.168278 containerd[1905]: time="2025-01-13T20:38:29.168230094Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.526423429s" Jan 13 20:38:29.168278 containerd[1905]: time="2025-01-13T20:38:29.168273167Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 20:38:29.170292 containerd[1905]: time="2025-01-13T20:38:29.170261146Z" level=info msg="CreateContainer within sandbox \"0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:38:29.188788 containerd[1905]: time="2025-01-13T20:38:29.188745224Z" level=info msg="CreateContainer within sandbox \"0dc62d1a8552089c0e8af78129533d810256670d72e9c4ddee06cd2c98e1405c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"04e8b473a5bf51e5d8b2717ecff7c68ad2b99731b6bc27e9acc0962cb30d8c98\"" Jan 13 20:38:29.197779 containerd[1905]: time="2025-01-13T20:38:29.197725702Z" level=info msg="StartContainer for \"04e8b473a5bf51e5d8b2717ecff7c68ad2b99731b6bc27e9acc0962cb30d8c98\"" Jan 13 20:38:29.260966 systemd[1]: run-containerd-runc-k8s.io-04e8b473a5bf51e5d8b2717ecff7c68ad2b99731b6bc27e9acc0962cb30d8c98-runc.lAZDLe.mount: Deactivated successfully. Jan 13 20:38:29.269028 systemd[1]: Started cri-containerd-04e8b473a5bf51e5d8b2717ecff7c68ad2b99731b6bc27e9acc0962cb30d8c98.scope - libcontainer container 04e8b473a5bf51e5d8b2717ecff7c68ad2b99731b6bc27e9acc0962cb30d8c98. 
Jan 13 20:38:29.302994 containerd[1905]: time="2025-01-13T20:38:29.302708887Z" level=info msg="StartContainer for \"04e8b473a5bf51e5d8b2717ecff7c68ad2b99731b6bc27e9acc0962cb30d8c98\" returns successfully" Jan 13 20:38:29.427116 kubelet[2385]: E0113 20:38:29.426987 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:30.073026 kubelet[2385]: I0113 20:38:30.072986 2385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.545653062 podStartE2EDuration="9.072850338s" podCreationTimestamp="2025-01-13 20:38:21 +0000 UTC" firstStartedPulling="2025-01-13 20:38:22.641381868 +0000 UTC m=+38.663530715" lastFinishedPulling="2025-01-13 20:38:29.16857914 +0000 UTC m=+45.190727991" observedRunningTime="2025-01-13 20:38:30.072821911 +0000 UTC m=+46.094970778" watchObservedRunningTime="2025-01-13 20:38:30.072850338 +0000 UTC m=+46.094999195" Jan 13 20:38:30.427734 kubelet[2385]: E0113 20:38:30.427677 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:31.428601 kubelet[2385]: E0113 20:38:31.428552 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:32.428764 kubelet[2385]: E0113 20:38:32.428723 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:33.429963 kubelet[2385]: E0113 20:38:33.429906 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:34.430121 kubelet[2385]: E0113 20:38:34.430069 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:35.430782 kubelet[2385]: E0113 20:38:35.430732 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:36.431225 kubelet[2385]: E0113 20:38:36.431170 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:37.431663 kubelet[2385]: E0113 20:38:37.431607 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:38.432424 kubelet[2385]: E0113 20:38:38.432373 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:39.433116 kubelet[2385]: E0113 20:38:39.433056 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:40.433297 kubelet[2385]: E0113 20:38:40.433243 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:41.434207 kubelet[2385]: E0113 20:38:41.434150 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:42.435283 kubelet[2385]: E0113 20:38:42.435242 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:43.435646 kubelet[2385]: E0113 20:38:43.435592 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:44.376304 
kubelet[2385]: E0113 20:38:44.376255 2385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:44.419671 containerd[1905]: time="2025-01-13T20:38:44.419622008Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:44.420407 containerd[1905]: time="2025-01-13T20:38:44.419759897Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully" Jan 13 20:38:44.420407 containerd[1905]: time="2025-01-13T20:38:44.419776245Z" level=info msg="StopPodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully" Jan 13 20:38:44.425154 containerd[1905]: time="2025-01-13T20:38:44.425106188Z" level=info msg="RemovePodSandbox for \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:44.435973 containerd[1905]: time="2025-01-13T20:38:44.435926356Z" level=info msg="Forcibly stopping sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\"" Jan 13 20:38:44.436756 kubelet[2385]: E0113 20:38:44.436707 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:38:44.443764 containerd[1905]: time="2025-01-13T20:38:44.436054133Z" level=info msg="TearDown network for sandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" successfully" Jan 13 20:38:44.455286 containerd[1905]: time="2025-01-13T20:38:44.455224974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.455454 containerd[1905]: time="2025-01-13T20:38:44.455340151Z" level=info msg="RemovePodSandbox \"074ec1518c9ab3ebf7ff06a83ba79d8a9af0b3caafdf6e854ee68d67f2e97b51\" returns successfully" Jan 13 20:38:44.455891 containerd[1905]: time="2025-01-13T20:38:44.455854070Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:44.456000 containerd[1905]: time="2025-01-13T20:38:44.455975375Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully" Jan 13 20:38:44.456203 containerd[1905]: time="2025-01-13T20:38:44.455997060Z" level=info msg="StopPodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully" Jan 13 20:38:44.456302 containerd[1905]: time="2025-01-13T20:38:44.456277848Z" level=info msg="RemovePodSandbox for \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:44.456381 containerd[1905]: time="2025-01-13T20:38:44.456308067Z" level=info msg="Forcibly stopping sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\"" Jan 13 20:38:44.456456 containerd[1905]: time="2025-01-13T20:38:44.456384516Z" level=info msg="TearDown network for sandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" successfully" Jan 13 20:38:44.459080 containerd[1905]: time="2025-01-13T20:38:44.459047227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 13 20:38:44.459254 containerd[1905]: time="2025-01-13T20:38:44.459107446Z" level=info msg="RemovePodSandbox \"79b7a7a012bf25b30655a7c63c78120828f3bce59fc473da5c1f3fe30e35fa21\" returns successfully" Jan 13 20:38:44.459451 containerd[1905]: time="2025-01-13T20:38:44.459426950Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:44.459548 containerd[1905]: time="2025-01-13T20:38:44.459527544Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully" Jan 13 20:38:44.459630 containerd[1905]: time="2025-01-13T20:38:44.459547096Z" level=info msg="StopPodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully" Jan 13 20:38:44.460763 containerd[1905]: time="2025-01-13T20:38:44.459872306Z" level=info msg="RemovePodSandbox for \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:44.460763 containerd[1905]: time="2025-01-13T20:38:44.459904022Z" level=info msg="Forcibly stopping sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\"" Jan 13 20:38:44.460763 containerd[1905]: time="2025-01-13T20:38:44.459976726Z" level=info msg="TearDown network for sandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" successfully" Jan 13 20:38:44.462834 containerd[1905]: time="2025-01-13T20:38:44.462766751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.462834 containerd[1905]: time="2025-01-13T20:38:44.462830089Z" level=info msg="RemovePodSandbox \"9133822f56b31ae28856c4a10c6dee08b57421b918793cfbbde8a1c103af80fb\" returns successfully" Jan 13 20:38:44.463329 containerd[1905]: time="2025-01-13T20:38:44.463294761Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:44.463465 containerd[1905]: time="2025-01-13T20:38:44.463441178Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully" Jan 13 20:38:44.463586 containerd[1905]: time="2025-01-13T20:38:44.463463132Z" level=info msg="StopPodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully" Jan 13 20:38:44.463814 containerd[1905]: time="2025-01-13T20:38:44.463773682Z" level=info msg="RemovePodSandbox for \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:44.463888 containerd[1905]: time="2025-01-13T20:38:44.463814340Z" level=info msg="Forcibly stopping sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\"" Jan 13 20:38:44.463962 containerd[1905]: time="2025-01-13T20:38:44.463905836Z" level=info msg="TearDown network for sandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" successfully" Jan 13 20:38:44.466622 containerd[1905]: time="2025-01-13T20:38:44.466593115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.466722 containerd[1905]: time="2025-01-13T20:38:44.466638302Z" level=info msg="RemovePodSandbox \"b3d6b5dd34f82788c3f70bf01673ca5f3225c7bd2dff26b0f7120a12b60aa13b\" returns successfully" Jan 13 20:38:44.475281 containerd[1905]: time="2025-01-13T20:38:44.475227949Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:44.475427 containerd[1905]: time="2025-01-13T20:38:44.475373391Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully" Jan 13 20:38:44.475427 containerd[1905]: time="2025-01-13T20:38:44.475390913Z" level=info msg="StopPodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully" Jan 13 20:38:44.476019 containerd[1905]: time="2025-01-13T20:38:44.475961261Z" level=info msg="RemovePodSandbox for \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:44.476135 containerd[1905]: time="2025-01-13T20:38:44.476019713Z" level=info msg="Forcibly stopping sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\"" Jan 13 20:38:44.476182 containerd[1905]: time="2025-01-13T20:38:44.476103156Z" level=info msg="TearDown network for sandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" successfully" Jan 13 20:38:44.478944 containerd[1905]: time="2025-01-13T20:38:44.478898470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.479122 containerd[1905]: time="2025-01-13T20:38:44.478974795Z" level=info msg="RemovePodSandbox \"96bcad32fe60316a2ee935a8f62a3e1b8a28362f05f614c1e8f43fff7c98a4e7\" returns successfully" Jan 13 20:38:44.479458 containerd[1905]: time="2025-01-13T20:38:44.479429929Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:44.479578 containerd[1905]: time="2025-01-13T20:38:44.479549267Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully" Jan 13 20:38:44.479578 containerd[1905]: time="2025-01-13T20:38:44.479569149Z" level=info msg="StopPodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully" Jan 13 20:38:44.480002 containerd[1905]: time="2025-01-13T20:38:44.479975456Z" level=info msg="RemovePodSandbox for \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:44.480075 containerd[1905]: time="2025-01-13T20:38:44.480005950Z" level=info msg="Forcibly stopping sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\"" Jan 13 20:38:44.480219 containerd[1905]: time="2025-01-13T20:38:44.480094593Z" level=info msg="TearDown network for sandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" successfully" Jan 13 20:38:44.482667 containerd[1905]: time="2025-01-13T20:38:44.482629830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.483036 containerd[1905]: time="2025-01-13T20:38:44.482684928Z" level=info msg="RemovePodSandbox \"a67f58d6faa5b8c89ca4012f18ab847330812ca383610f725e1f2f3d7b503b82\" returns successfully" Jan 13 20:38:44.483228 containerd[1905]: time="2025-01-13T20:38:44.483195273Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:44.483354 containerd[1905]: time="2025-01-13T20:38:44.483320837Z" level=info msg="TearDown network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully" Jan 13 20:38:44.483435 containerd[1905]: time="2025-01-13T20:38:44.483352164Z" level=info msg="StopPodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully" Jan 13 20:38:44.483674 containerd[1905]: time="2025-01-13T20:38:44.483642404Z" level=info msg="RemovePodSandbox for \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:44.483876 containerd[1905]: time="2025-01-13T20:38:44.483674330Z" level=info msg="Forcibly stopping sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\"" Jan 13 20:38:44.483876 containerd[1905]: time="2025-01-13T20:38:44.483750294Z" level=info msg="TearDown network for sandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" successfully" Jan 13 20:38:44.486461 containerd[1905]: time="2025-01-13T20:38:44.486423799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.486556 containerd[1905]: time="2025-01-13T20:38:44.486470718Z" level=info msg="RemovePodSandbox \"d28fad2596387831910ae929b2260af6f23ab4a3a5f479b6964cf1397ab3181a\" returns successfully" Jan 13 20:38:44.486871 containerd[1905]: time="2025-01-13T20:38:44.486842238Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:44.486955 containerd[1905]: time="2025-01-13T20:38:44.486934235Z" level=info msg="TearDown network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" successfully" Jan 13 20:38:44.486955 containerd[1905]: time="2025-01-13T20:38:44.486952481Z" level=info msg="StopPodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" returns successfully" Jan 13 20:38:44.487289 containerd[1905]: time="2025-01-13T20:38:44.487269437Z" level=info msg="RemovePodSandbox for \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:44.487364 containerd[1905]: time="2025-01-13T20:38:44.487295126Z" level=info msg="Forcibly stopping sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\"" Jan 13 20:38:44.487422 containerd[1905]: time="2025-01-13T20:38:44.487369684Z" level=info msg="TearDown network for sandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" successfully" Jan 13 20:38:44.490253 containerd[1905]: time="2025-01-13T20:38:44.490222507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.490253 containerd[1905]: time="2025-01-13T20:38:44.490272350Z" level=info msg="RemovePodSandbox \"fe41dba35e643fb762d8595f32b74379eb160fd770ee7b5b2df1708311b033a7\" returns successfully" Jan 13 20:38:44.490654 containerd[1905]: time="2025-01-13T20:38:44.490631440Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" Jan 13 20:38:44.490749 containerd[1905]: time="2025-01-13T20:38:44.490725033Z" level=info msg="TearDown network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" successfully" Jan 13 20:38:44.491022 containerd[1905]: time="2025-01-13T20:38:44.490747849Z" level=info msg="StopPodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" returns successfully" Jan 13 20:38:44.492078 containerd[1905]: time="2025-01-13T20:38:44.491172432Z" level=info msg="RemovePodSandbox for \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" Jan 13 20:38:44.492078 containerd[1905]: time="2025-01-13T20:38:44.491201545Z" level=info msg="Forcibly stopping sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\"" Jan 13 20:38:44.492078 containerd[1905]: time="2025-01-13T20:38:44.491284517Z" level=info msg="TearDown network for sandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" successfully" Jan 13 20:38:44.493755 containerd[1905]: time="2025-01-13T20:38:44.493632041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.493755 containerd[1905]: time="2025-01-13T20:38:44.493683086Z" level=info msg="RemovePodSandbox \"eb17950cc9c96b8e568db96274f7c59eb6be21e7182768be1dca27d1fe41e90d\" returns successfully" Jan 13 20:38:44.494125 containerd[1905]: time="2025-01-13T20:38:44.494106194Z" level=info msg="StopPodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\"" Jan 13 20:38:44.494219 containerd[1905]: time="2025-01-13T20:38:44.494199064Z" level=info msg="TearDown network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" successfully" Jan 13 20:38:44.494271 containerd[1905]: time="2025-01-13T20:38:44.494219528Z" level=info msg="StopPodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" returns successfully" Jan 13 20:38:44.494581 containerd[1905]: time="2025-01-13T20:38:44.494554118Z" level=info msg="RemovePodSandbox for \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\"" Jan 13 20:38:44.494644 containerd[1905]: time="2025-01-13T20:38:44.494579458Z" level=info msg="Forcibly stopping sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\"" Jan 13 20:38:44.494713 containerd[1905]: time="2025-01-13T20:38:44.494658298Z" level=info msg="TearDown network for sandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" successfully" Jan 13 20:38:44.497181 containerd[1905]: time="2025-01-13T20:38:44.497152529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.497181 containerd[1905]: time="2025-01-13T20:38:44.497199624Z" level=info msg="RemovePodSandbox \"28dcc492b06f4b3e09a936f697f9a4de1c3c55f51ad405ac7e8df21c0b928493\" returns successfully" Jan 13 20:38:44.497704 containerd[1905]: time="2025-01-13T20:38:44.497679891Z" level=info msg="StopPodSandbox for \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\"" Jan 13 20:38:44.497790 containerd[1905]: time="2025-01-13T20:38:44.497774192Z" level=info msg="TearDown network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\" successfully" Jan 13 20:38:44.497873 containerd[1905]: time="2025-01-13T20:38:44.497789703Z" level=info msg="StopPodSandbox for \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\" returns successfully" Jan 13 20:38:44.498117 containerd[1905]: time="2025-01-13T20:38:44.498078510Z" level=info msg="RemovePodSandbox for \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\"" Jan 13 20:38:44.498117 containerd[1905]: time="2025-01-13T20:38:44.498106231Z" level=info msg="Forcibly stopping sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\"" Jan 13 20:38:44.498249 containerd[1905]: time="2025-01-13T20:38:44.498180922Z" level=info msg="TearDown network for sandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\" successfully" Jan 13 20:38:44.501210 containerd[1905]: time="2025-01-13T20:38:44.500881155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.501210 containerd[1905]: time="2025-01-13T20:38:44.500955764Z" level=info msg="RemovePodSandbox \"e32505e8edc8132de5387724e895107f8f9267e292efebdcd7965231cabac80f\" returns successfully" Jan 13 20:38:44.503284 containerd[1905]: time="2025-01-13T20:38:44.501527113Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:44.503284 containerd[1905]: time="2025-01-13T20:38:44.501608493Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully" Jan 13 20:38:44.503284 containerd[1905]: time="2025-01-13T20:38:44.501656524Z" level=info msg="StopPodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully" Jan 13 20:38:44.503696 containerd[1905]: time="2025-01-13T20:38:44.503672454Z" level=info msg="RemovePodSandbox for \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:44.503793 containerd[1905]: time="2025-01-13T20:38:44.503702171Z" level=info msg="Forcibly stopping sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\"" Jan 13 20:38:44.503860 containerd[1905]: time="2025-01-13T20:38:44.503780426Z" level=info msg="TearDown network for sandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" successfully" Jan 13 20:38:44.514235 containerd[1905]: time="2025-01-13T20:38:44.514008639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.514235 containerd[1905]: time="2025-01-13T20:38:44.514113652Z" level=info msg="RemovePodSandbox \"1b8174eea0195b9e38bca9c628e1cbceaa463e960db3da39c355bb4c1b7a4090\" returns successfully" Jan 13 20:38:44.515667 containerd[1905]: time="2025-01-13T20:38:44.515633741Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:44.515924 containerd[1905]: time="2025-01-13T20:38:44.515896039Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully" Jan 13 20:38:44.516256 containerd[1905]: time="2025-01-13T20:38:44.516023477Z" level=info msg="StopPodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully" Jan 13 20:38:44.525526 containerd[1905]: time="2025-01-13T20:38:44.525484323Z" level=info msg="RemovePodSandbox for \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:44.525691 containerd[1905]: time="2025-01-13T20:38:44.525670768Z" level=info msg="Forcibly stopping sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\"" Jan 13 20:38:44.525903 containerd[1905]: time="2025-01-13T20:38:44.525833981Z" level=info msg="TearDown network for sandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" successfully" Jan 13 20:38:44.533932 containerd[1905]: time="2025-01-13T20:38:44.533736485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.533932 containerd[1905]: time="2025-01-13T20:38:44.533821005Z" level=info msg="RemovePodSandbox \"5335e9261ebd74ed976f2061761ab16c100745bb9b5c6920642cc354cfdf8903\" returns successfully" Jan 13 20:38:44.534731 containerd[1905]: time="2025-01-13T20:38:44.534547643Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:44.534731 containerd[1905]: time="2025-01-13T20:38:44.534655965Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully" Jan 13 20:38:44.534731 containerd[1905]: time="2025-01-13T20:38:44.534667542Z" level=info msg="StopPodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully" Jan 13 20:38:44.535054 containerd[1905]: time="2025-01-13T20:38:44.535026678Z" level=info msg="RemovePodSandbox for \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:44.535114 containerd[1905]: time="2025-01-13T20:38:44.535054419Z" level=info msg="Forcibly stopping sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\"" Jan 13 20:38:44.535184 containerd[1905]: time="2025-01-13T20:38:44.535132673Z" level=info msg="TearDown network for sandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" successfully" Jan 13 20:38:44.537727 containerd[1905]: time="2025-01-13T20:38:44.537686470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.537843 containerd[1905]: time="2025-01-13T20:38:44.537735378Z" level=info msg="RemovePodSandbox \"2374e31ae2768ecd83eafe33d203317c28ef0a8b6f8b5638754a0972bf230068\" returns successfully" Jan 13 20:38:44.538310 containerd[1905]: time="2025-01-13T20:38:44.538098613Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:44.538310 containerd[1905]: time="2025-01-13T20:38:44.538172912Z" level=info msg="TearDown network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" successfully" Jan 13 20:38:44.538310 containerd[1905]: time="2025-01-13T20:38:44.538182339Z" level=info msg="StopPodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" returns successfully" Jan 13 20:38:44.538626 containerd[1905]: time="2025-01-13T20:38:44.538565485Z" level=info msg="RemovePodSandbox for \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:44.538717 containerd[1905]: time="2025-01-13T20:38:44.538608417Z" level=info msg="Forcibly stopping sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\"" Jan 13 20:38:44.539550 containerd[1905]: time="2025-01-13T20:38:44.538768346Z" level=info msg="TearDown network for sandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" successfully" Jan 13 20:38:44.542910 containerd[1905]: time="2025-01-13T20:38:44.542877690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:38:44.543481 containerd[1905]: time="2025-01-13T20:38:44.543394088Z" level=info msg="RemovePodSandbox \"653c5eb762d3334505f0b22b319e58bb6fdbed61d7ed67c432b00b7ef9888869\" returns successfully" Jan 13 20:38:44.543696 containerd[1905]: time="2025-01-13T20:38:44.543662568Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" Jan 13 20:38:44.543784 containerd[1905]: time="2025-01-13T20:38:44.543767280Z" level=info msg="TearDown network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" successfully" Jan 13 20:38:44.543876 containerd[1905]: time="2025-01-13T20:38:44.543785589Z" level=info msg="StopPodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" returns successfully" Jan 13 20:38:44.544159 containerd[1905]: time="2025-01-13T20:38:44.544113764Z" level=info msg="RemovePodSandbox for \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" Jan 13 20:38:44.544159 containerd[1905]: time="2025-01-13T20:38:44.544143206Z" level=info msg="Forcibly stopping sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\"" Jan 13 20:38:44.544265 containerd[1905]: time="2025-01-13T20:38:44.544224135Z" level=info msg="TearDown network for sandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" successfully" Jan 13 20:38:44.548474 containerd[1905]: time="2025-01-13T20:38:44.547942810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:38:44.548474 containerd[1905]: time="2025-01-13T20:38:44.547999838Z" level=info msg="RemovePodSandbox \"95501e1454fb74aa31cd8acf8cc68d11dccc01ce1558fa745e493924f5e613b0\" returns successfully"
Jan 13 20:38:44.549522 containerd[1905]: time="2025-01-13T20:38:44.549496983Z" level=info msg="StopPodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\""
Jan 13 20:38:44.549715 containerd[1905]: time="2025-01-13T20:38:44.549695471Z" level=info msg="TearDown network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" successfully"
Jan 13 20:38:44.549933 containerd[1905]: time="2025-01-13T20:38:44.549914741Z" level=info msg="StopPodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" returns successfully"
Jan 13 20:38:44.550310 containerd[1905]: time="2025-01-13T20:38:44.550290431Z" level=info msg="RemovePodSandbox for \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\""
Jan 13 20:38:44.551383 containerd[1905]: time="2025-01-13T20:38:44.550373706Z" level=info msg="Forcibly stopping sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\""
Jan 13 20:38:44.551383 containerd[1905]: time="2025-01-13T20:38:44.550433191Z" level=info msg="TearDown network for sandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" successfully"
Jan 13 20:38:44.553522 containerd[1905]: time="2025-01-13T20:38:44.553490467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:38:44.553710 containerd[1905]: time="2025-01-13T20:38:44.553677841Z" level=info msg="RemovePodSandbox \"7eaec02d2de86707c62bb637cdf99ef2cff3ade266f2d76899da015b47983de1\" returns successfully"
Jan 13 20:38:44.554370 containerd[1905]: time="2025-01-13T20:38:44.554346052Z" level=info msg="StopPodSandbox for \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\""
Jan 13 20:38:44.554736 containerd[1905]: time="2025-01-13T20:38:44.554715163Z" level=info msg="TearDown network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\" successfully"
Jan 13 20:38:44.554882 containerd[1905]: time="2025-01-13T20:38:44.554862682Z" level=info msg="StopPodSandbox for \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\" returns successfully"
Jan 13 20:38:44.555443 containerd[1905]: time="2025-01-13T20:38:44.555419977Z" level=info msg="RemovePodSandbox for \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\""
Jan 13 20:38:44.555839 containerd[1905]: time="2025-01-13T20:38:44.555547683Z" level=info msg="Forcibly stopping sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\""
Jan 13 20:38:44.555839 containerd[1905]: time="2025-01-13T20:38:44.555663486Z" level=info msg="TearDown network for sandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\" successfully"
Jan 13 20:38:44.558404 containerd[1905]: time="2025-01-13T20:38:44.558364976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:38:44.558486 containerd[1905]: time="2025-01-13T20:38:44.558413969Z" level=info msg="RemovePodSandbox \"308e725ba2b39d8798174ec2497321439f1cde5d598dac4f754632561d57299b\" returns successfully"
Jan 13 20:38:45.437882 kubelet[2385]: E0113 20:38:45.437832 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:46.438733 kubelet[2385]: E0113 20:38:46.438599 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:47.439120 kubelet[2385]: E0113 20:38:47.439065 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:48.440063 kubelet[2385]: E0113 20:38:48.440012 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:49.440588 kubelet[2385]: E0113 20:38:49.440537 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:50.440708 kubelet[2385]: E0113 20:38:50.440651 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:51.441861 kubelet[2385]: E0113 20:38:51.441796 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:52.442100 kubelet[2385]: E0113 20:38:52.442045 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:53.208139 systemd[1]: run-containerd-runc-k8s.io-f78774266631a6cdad57a3d4016089774ca9b84922d2e4798c107ac559d8d304-runc.5yySzr.mount: Deactivated successfully.
Jan 13 20:38:53.442271 kubelet[2385]: E0113 20:38:53.442194 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:54.292403 kubelet[2385]: I0113 20:38:54.292215 2385 topology_manager.go:215] "Topology Admit Handler" podUID="af99560e-1362-4fd5-80f6-66832fed18a5" podNamespace="default" podName="test-pod-1"
Jan 13 20:38:54.301638 systemd[1]: Created slice kubepods-besteffort-podaf99560e_1362_4fd5_80f6_66832fed18a5.slice - libcontainer container kubepods-besteffort-podaf99560e_1362_4fd5_80f6_66832fed18a5.slice.
Jan 13 20:38:54.371740 kubelet[2385]: I0113 20:38:54.371701 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmcn4\" (UniqueName: \"kubernetes.io/projected/af99560e-1362-4fd5-80f6-66832fed18a5-kube-api-access-tmcn4\") pod \"test-pod-1\" (UID: \"af99560e-1362-4fd5-80f6-66832fed18a5\") " pod="default/test-pod-1"
Jan 13 20:38:54.371948 kubelet[2385]: I0113 20:38:54.371821 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2c5f27b-6ceb-43a5-aa86-9c949d5d0220\" (UniqueName: \"kubernetes.io/nfs/af99560e-1362-4fd5-80f6-66832fed18a5-pvc-d2c5f27b-6ceb-43a5-aa86-9c949d5d0220\") pod \"test-pod-1\" (UID: \"af99560e-1362-4fd5-80f6-66832fed18a5\") " pod="default/test-pod-1"
Jan 13 20:38:54.442415 kubelet[2385]: E0113 20:38:54.442363 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:54.554845 kernel: FS-Cache: Loaded
Jan 13 20:38:54.747777 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:38:54.747910 kernel: RPC: Registered udp transport module.
Jan 13 20:38:54.747934 kernel: RPC: Registered tcp transport module.
Jan 13 20:38:54.747950 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:38:54.747966 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:38:55.200844 kernel: NFS: Registering the id_resolver key type
Jan 13 20:38:55.201001 kernel: Key type id_resolver registered
Jan 13 20:38:55.201033 kernel: Key type id_legacy registered
Jan 13 20:38:55.248616 nfsidmap[4569]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:38:55.255862 nfsidmap[4570]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:38:55.442849 kubelet[2385]: E0113 20:38:55.442787 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:55.506363 containerd[1905]: time="2025-01-13T20:38:55.506240868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af99560e-1362-4fd5-80f6-66832fed18a5,Namespace:default,Attempt:0,}"
Jan 13 20:38:55.746740 (udev-worker)[4566]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:38:55.749579 systemd-networkd[1776]: cali5ec59c6bf6e: Link UP
Jan 13 20:38:55.752018 systemd-networkd[1776]: cali5ec59c6bf6e: Gained carrier
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.598 [INFO][4572] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.143-k8s-test--pod--1-eth0 default af99560e-1362-4fd5-80f6-66832fed18a5 1339 0 2025-01-13 20:38:22 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.143 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.598 [INFO][4572] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.668 [INFO][4583] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" HandleID="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Workload="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.682 [INFO][4583] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" HandleID="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Workload="172.31.25.143-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcd60), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.143", "pod":"test-pod-1", "timestamp":"2025-01-13 20:38:55.668840353 +0000 UTC"}, Hostname:"172.31.25.143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.682 [INFO][4583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.682 [INFO][4583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.682 [INFO][4583] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.143'
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.685 [INFO][4583] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.691 [INFO][4583] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.697 [INFO][4583] ipam/ipam.go 489: Trying affinity for 192.168.52.64/26 host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.701 [INFO][4583] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.704 [INFO][4583] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.705 [INFO][4583] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.709 [INFO][4583] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.715 [INFO][4583] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.727 [INFO][4583] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.68/26] block=192.168.52.64/26 handle="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.727 [INFO][4583] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.68/26] handle="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" host="172.31.25.143"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.727 [INFO][4583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
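The ipam.go entries above record the whole assignment: take the host-wide IPAM lock, find the block already affine to host 172.31.25.143 (192.168.52.64/26), claim the next free address from it, and release the lock. As a rough illustration only, a minimal Go sketch of that claim step, not Calico's actual ipam.go; the block occupancy is assumed here so the result matches the 192.168.52.68 the log reports:

// Minimal sketch (hypothetical, not Calico code) of claiming the next free
// IP from a host-affine block under a host-wide lock.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr net.IPNet
	used map[string]bool // addresses already handed out (assumed occupancy)
}

var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock"

// assignFromBlock walks the block and claims the first free address,
// mirroring "Attempting to assign 1 addresses from block".
func assignFromBlock(b *block) (net.IP, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock() // "Released host-wide IPAM lock"
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

// next returns the numerically following IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.52.64/26") // the affine block in the log
	b := &block{cidr: *cidr, used: map[string]bool{ // assumed: first four taken
		"192.168.52.64": true, "192.168.52.65": true,
		"192.168.52.66": true, "192.168.52.67": true,
	}}
	ip, _ := assignFromBlock(b)
	fmt.Println("assigned", ip) // -> 192.168.52.68, as in the log
}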
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.727 [INFO][4583] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.68/26] IPv6=[] ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" HandleID="k8s-pod-network.8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Workload="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.773922 containerd[1905]: 2025-01-13 20:38:55.735 [INFO][4572] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"af99560e-1362-4fd5-80f6-66832fed18a5", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 38, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:38:55.775394 containerd[1905]: 2025-01-13 20:38:55.739 [INFO][4572] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.68/32] ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.775394 containerd[1905]: 2025-01-13 20:38:55.739 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.775394 containerd[1905]: 2025-01-13 20:38:55.754 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.775394 containerd[1905]: 2025-01-13 20:38:55.755 [INFO][4572] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.143-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"af99560e-1362-4fd5-80f6-66832fed18a5", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 38, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.143", ContainerID:"8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"fe:68:7d:54:01:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:38:55.775394 containerd[1905]: 2025-01-13 20:38:55.769 [INFO][4572] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.143-k8s-test--pod--1-eth0"
Jan 13 20:38:55.812274 containerd[1905]: time="2025-01-13T20:38:55.811764117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:38:55.812274 containerd[1905]: time="2025-01-13T20:38:55.812021690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:38:55.812274 containerd[1905]: time="2025-01-13T20:38:55.812042926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:38:55.813819 containerd[1905]: time="2025-01-13T20:38:55.812207961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:38:55.857160 systemd[1]: Started cri-containerd-8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27.scope - libcontainer container 8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27.
Jan 13 20:38:55.913061 containerd[1905]: time="2025-01-13T20:38:55.912998088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af99560e-1362-4fd5-80f6-66832fed18a5,Namespace:default,Attempt:0,} returns sandbox id \"8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27\""
Jan 13 20:38:55.918103 containerd[1905]: time="2025-01-13T20:38:55.917846112Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:38:56.247877 containerd[1905]: time="2025-01-13T20:38:56.247793837Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:38:56.248904 containerd[1905]: time="2025-01-13T20:38:56.248848917Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:38:56.263573 containerd[1905]: time="2025-01-13T20:38:56.261028277Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 343.135082ms"
Jan 13 20:38:56.263573 containerd[1905]: time="2025-01-13T20:38:56.261082764Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:38:56.281105 containerd[1905]: time="2025-01-13T20:38:56.281054587Z" level=info msg="CreateContainer within sandbox \"8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:38:56.311300 containerd[1905]: time="2025-01-13T20:38:56.311253089Z" level=info msg="CreateContainer within sandbox \"8bd040ca748b82b6c43596ede340174d1ea24bbfe9a933c88d6a5f1c019b2b27\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e073a9dbc62b3bb3fc1e405fd63d0fd70f4197fbf2c54687e25fefc638fa6fbe\""
Jan 13 20:38:56.312063 containerd[1905]: time="2025-01-13T20:38:56.312030128Z" level=info msg="StartContainer for \"e073a9dbc62b3bb3fc1e405fd63d0fd70f4197fbf2c54687e25fefc638fa6fbe\""
Jan 13 20:38:56.349003 systemd[1]: Started cri-containerd-e073a9dbc62b3bb3fc1e405fd63d0fd70f4197fbf2c54687e25fefc638fa6fbe.scope - libcontainer container e073a9dbc62b3bb3fc1e405fd63d0fd70f4197fbf2c54687e25fefc638fa6fbe.
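Taken together, the entries from 20:38:55.506 to 20:38:56.349 trace one pod start in CRI order: RunPodSandbox, PullImage, CreateContainer, StartContainer. A minimal Go sketch of that call sequence follows; the interface and fake runtime are illustrative stand-ins, not the real CRI or containerd API surface:

// Hypothetical sketch of the CRI verb order visible in the log above.
package main

import "fmt"

type podRuntime interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	PullImage(ref string) (imageID string, err error)
	CreateContainer(sandboxID, imageID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod walks the same sequence the log records for test-pod-1.
func startPod(r podRuntime, pod, image string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	img, err := r.PullImage(image)
	if err != nil {
		return fmt.Errorf("PullImage: %w", err)
	}
	ctr, err := r.CreateContainer(sb, img, "test")
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return r.StartContainer(ctr)
}

// fakeRuntime is a trivial stand-in so the sketch runs end to end.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(name string) (string, error) { return "sb-" + name, nil }
func (fakeRuntime) PullImage(ref string) (string, error)      { return "img-" + ref, nil }
func (fakeRuntime) CreateContainer(sb, img, name string) (string, error) {
	return "ctr-" + name, nil
}
func (fakeRuntime) StartContainer(id string) error { fmt.Println("started", id); return nil }

func main() {
	if err := startPod(fakeRuntime{}, "test-pod-1", "ghcr.io/flatcar/nginx:latest"); err != nil {
		fmt.Println("error:", err)
	}
}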
Jan 13 20:38:56.402501 containerd[1905]: time="2025-01-13T20:38:56.402451189Z" level=info msg="StartContainer for \"e073a9dbc62b3bb3fc1e405fd63d0fd70f4197fbf2c54687e25fefc638fa6fbe\" returns successfully"
Jan 13 20:38:56.444038 kubelet[2385]: E0113 20:38:56.443984 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:57.444941 kubelet[2385]: E0113 20:38:57.444871 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:57.670533 systemd-networkd[1776]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 20:38:58.446160 kubelet[2385]: E0113 20:38:58.445998 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:38:59.446476 kubelet[2385]: E0113 20:38:59.446419 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:00.201455 ntpd[1875]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 13 20:39:00.201910 ntpd[1875]: 13 Jan 20:39:00 ntpd[1875]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 13 20:39:00.447564 kubelet[2385]: E0113 20:39:00.447503 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:01.448525 kubelet[2385]: E0113 20:39:01.448466 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:02.448893 kubelet[2385]: E0113 20:39:02.448619 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:03.449520 kubelet[2385]: E0113 20:39:03.449437 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:04.375685 kubelet[2385]: E0113 20:39:04.375633 2385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:04.450062 kubelet[2385]: E0113 20:39:04.450004 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:05.450593 kubelet[2385]: E0113 20:39:05.450540 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:06.451321 kubelet[2385]: E0113 20:39:06.451271 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:07.451812 kubelet[2385]: E0113 20:39:07.451741 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:08.452647 kubelet[2385]: E0113 20:39:08.452600 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:09.453690 kubelet[2385]: E0113 20:39:09.453630 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:10.454748 kubelet[2385]: E0113 20:39:10.454690 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:11.455148 kubelet[2385]: E0113 20:39:11.455092 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:12.456042 kubelet[2385]: E0113 20:39:12.455976 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:13.456762 kubelet[2385]: E0113 20:39:13.456704 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:14.457762 kubelet[2385]: E0113 20:39:14.457710 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:15.458103 kubelet[2385]: E0113 20:39:15.458049 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:16.459251 kubelet[2385]: E0113 20:39:16.459198 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:17.460271 kubelet[2385]: E0113 20:39:17.460093 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:18.460594 kubelet[2385]: E0113 20:39:18.460523 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:19.461562 kubelet[2385]: E0113 20:39:19.461506 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:20.461946 kubelet[2385]: E0113 20:39:20.461892 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:21.462867 kubelet[2385]: E0113 20:39:21.462783 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:22.463956 kubelet[2385]: E0113 20:39:22.463902 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:23.464963 kubelet[2385]: E0113 20:39:23.464907 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:24.376131 kubelet[2385]: E0113 20:39:24.376038 2385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:24.465588 kubelet[2385]: E0113 20:39:24.465532 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:25.466664 kubelet[2385]: E0113 20:39:25.466613 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:26.252276 kubelet[2385]: E0113 20:39:26.252188 2385 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:39:26.468812 kubelet[2385]: E0113 20:39:26.468508 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:27.469159 kubelet[2385]: E0113 20:39:27.469107 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:28.469587 kubelet[2385]: E0113 20:39:28.469270 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:29.470663 kubelet[2385]: E0113 20:39:29.470427 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:30.471746 kubelet[2385]: E0113 20:39:30.471689 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:31.472820 kubelet[2385]: E0113 20:39:31.472744 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:32.473378 kubelet[2385]: E0113 20:39:32.473323 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:33.473889 kubelet[2385]: E0113 20:39:33.473834 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:34.474289 kubelet[2385]: E0113 20:39:34.474234 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:35.475032 kubelet[2385]: E0113 20:39:35.474977 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:36.253267 kubelet[2385]: E0113 20:39:36.253194 2385 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:39:36.302516 kubelet[2385]: E0113 20:39:36.301791 2385 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": unexpected EOF"
Jan 13 20:39:36.319403 kubelet[2385]: E0113 20:39:36.318528 2385 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": read tcp 172.31.25.143:39722->172.31.21.35:6443: read: connection reset by peer"
Jan 13 20:39:36.319403 kubelet[2385]: E0113 20:39:36.319262 2385 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused"
Jan 13 20:39:36.319403 kubelet[2385]: I0113 20:39:36.319321 2385 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 13 20:39:36.324414 kubelet[2385]: E0113 20:39:36.324374 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused" interval="200ms"
Jan 13 20:39:36.475374 kubelet[2385]: E0113 20:39:36.475326 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:36.526282 kubelet[2385]: E0113 20:39:36.526169 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused" interval="400ms"
Jan 13 20:39:36.928084 kubelet[2385]: E0113 20:39:36.928044 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused" interval="800ms"
Jan 13 20:39:37.223706 kubelet[2385]: E0113 20:39:37.223582 2385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.25.143\": Get \"https://172.31.21.35:6443/api/v1/nodes/172.31.25.143?resourceVersion=0&timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused"
Jan 13 20:39:37.224246 kubelet[2385]: E0113 20:39:37.224218 2385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.25.143\": Get \"https://172.31.21.35:6443/api/v1/nodes/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused"
Jan 13 20:39:37.224777 kubelet[2385]: E0113 20:39:37.224748 2385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.25.143\": Get \"https://172.31.21.35:6443/api/v1/nodes/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused"
Jan 13 20:39:37.225287 kubelet[2385]: E0113 20:39:37.225262 2385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.25.143\": Get \"https://172.31.21.35:6443/api/v1/nodes/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused"
Jan 13 20:39:37.225787 kubelet[2385]: E0113 20:39:37.225756 2385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.25.143\": Get \"https://172.31.21.35:6443/api/v1/nodes/172.31.25.143?timeout=10s\": dial tcp 172.31.21.35:6443: connect: connection refused"
Jan 13 20:39:37.225787 kubelet[2385]: E0113 20:39:37.225777 2385 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Jan 13 20:39:37.475937 kubelet[2385]: E0113 20:39:37.475801 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:38.477074 kubelet[2385]: E0113 20:39:38.476934 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:39.477147 kubelet[2385]: E0113 20:39:39.477086 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:40.477292 kubelet[2385]: E0113 20:39:40.477243 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:41.477818 kubelet[2385]: E0113 20:39:41.477747 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:42.478154 kubelet[2385]: E0113 20:39:42.478095 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:43.479179 kubelet[2385]: E0113 20:39:43.479125 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:44.375685 kubelet[2385]: E0113 20:39:44.375629 2385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:44.479903 kubelet[2385]: E0113 20:39:44.479857 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:45.480962 kubelet[2385]: E0113 20:39:45.480902 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:46.481853 kubelet[2385]: E0113 20:39:46.481787 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:47.482666 kubelet[2385]: E0113 20:39:47.482608 2385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:39:47.729710 kubelet[2385]: E0113 20:39:47.729660 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.143?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
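The lease entries above also show the retry cadence: each failed "ensure lease" roughly doubles the interval (interval="200ms", "400ms", "800ms", and later "1.6s"). A minimal Go sketch of that doubling backoff, with the failure and the cap assumed purely for illustration:

// Hypothetical sketch of the doubling retry interval visible in the
// controller.go entries; not the kubelet's actual lease controller.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the Get/Put against the API server; here it
// always fails, like the connection-refused errors in the log.
func ensureLease() error {
	return errors.New("dial tcp 172.31.21.35:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // an assumed cap for this sketch
	for attempt := 1; attempt <= 4; attempt++ {
		if err := ensureLease(); err != nil {
			// Prints 200ms, 400ms, 800ms, 1.6s across the four attempts,
			// matching the interval="..." fields in the log.
			fmt.Printf("Failed to ensure lease exists, will retry: %v (next interval %v)\n",
				err, interval)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		return
	}
}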