Mar 17 17:54:52.007872 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:54:52.007914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:54:52.007931 kernel: BIOS-provided physical RAM map:
Mar 17 17:54:52.007943 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:54:52.007955 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:54:52.007966 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:54:52.007984 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Mar 17 17:54:52.007997 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Mar 17 17:54:52.008009 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Mar 17 17:54:52.008022 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:54:52.008034 kernel: NX (Execute Disable) protection: active
Mar 17 17:54:52.008047 kernel: APIC: Static calls initialized
Mar 17 17:54:52.008059 kernel: SMBIOS 2.7 present.
Mar 17 17:54:52.008072 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 17 17:54:52.008171 kernel: Hypervisor detected: KVM
Mar 17 17:54:52.008187 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:54:52.008201 kernel: kvm-clock: using sched offset of 8453239857 cycles
Mar 17 17:54:52.008216 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:54:52.008230 kernel: tsc: Detected 2499.990 MHz processor
Mar 17 17:54:52.008244 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:54:52.008259 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:54:52.010313 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Mar 17 17:54:52.010337 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:54:52.010352 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:54:52.010367 kernel: Using GB pages for direct mapping
Mar 17 17:54:52.010381 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:54:52.010395 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Mar 17 17:54:52.010411 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Mar 17 17:54:52.010425 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:54:52.010440 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 17 17:54:52.010460 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Mar 17 17:54:52.010474 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 17 17:54:52.010489 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:54:52.010503 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 17 17:54:52.010515 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:54:52.010529 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 17 17:54:52.010543 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 17 17:54:52.010557 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 17 17:54:52.010571 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Mar 17 17:54:52.010589 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Mar 17 17:54:52.010610 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Mar 17 17:54:52.010624 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Mar 17 17:54:52.010685 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Mar 17 17:54:52.010700 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Mar 17 17:54:52.010718 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Mar 17 17:54:52.010733 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Mar 17 17:54:52.010748 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Mar 17 17:54:52.010763 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Mar 17 17:54:52.011064 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:54:52.011086 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:54:52.011101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 17 17:54:52.011116 kernel: NUMA: Initialized distance table, cnt=1
Mar 17 17:54:52.011131 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Mar 17 17:54:52.011151 kernel: Zone ranges:
Mar 17 17:54:52.011166 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:54:52.011181 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Mar 17 17:54:52.011196 kernel: Normal empty
Mar 17 17:54:52.011210 kernel: Movable zone start for each node
Mar 17 17:54:52.011225 kernel: Early memory node ranges
Mar 17 17:54:52.011240 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:54:52.011255 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Mar 17 17:54:52.011282 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Mar 17 17:54:52.011295 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:54:52.011310 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:54:52.011325 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Mar 17 17:54:52.011340 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 17 17:54:52.011355 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:54:52.011369 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 17 17:54:52.011384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:54:52.011399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:54:52.011414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:54:52.011429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:54:52.011447 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:54:52.011462 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:54:52.011476 kernel: TSC deadline timer available
Mar 17 17:54:52.011491 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:54:52.011507 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:54:52.011521 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 17 17:54:52.011536 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:54:52.011551 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:54:52.011566 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:54:52.011585 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:54:52.011600 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:54:52.011614 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:54:52.011629 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:54:52.011644 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:54:52.011661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:54:52.011677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:54:52.011691 kernel: random: crng init done
Mar 17 17:54:52.011709 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:54:52.011724 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:54:52.011739 kernel: Fallback order for Node 0: 0
Mar 17 17:54:52.011754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Mar 17 17:54:52.011768 kernel: Policy zone: DMA32
Mar 17 17:54:52.011783 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:54:52.011799 kernel: Memory: 1930296K/2057760K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 127204K reserved, 0K cma-reserved)
Mar 17 17:54:52.011814 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:54:52.011828 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:54:52.011846 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:54:52.011861 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:54:52.011876 kernel: Dynamic Preempt: voluntary
Mar 17 17:54:52.011891 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:54:52.011907 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:54:52.011923 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:54:52.011938 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:54:52.011952 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:54:52.011967 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:54:52.011985 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:54:52.012000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:54:52.012014 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:54:52.012030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:54:52.012045 kernel: Console: colour VGA+ 80x25
Mar 17 17:54:52.012059 kernel: printk: console [ttyS0] enabled
Mar 17 17:54:52.012074 kernel: ACPI: Core revision 20230628
Mar 17 17:54:52.012140 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 17 17:54:52.012158 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:54:52.012177 kernel: x2apic enabled
Mar 17 17:54:52.012193 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:54:52.012300 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2409306edf7, max_idle_ns: 440795259305 ns
Mar 17 17:54:52.012321 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499990)
Mar 17 17:54:52.012368 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 17 17:54:52.012385 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 17 17:54:52.012400 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:54:52.012415 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:54:52.012562 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:54:52.012583 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:54:52.012630 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 17 17:54:52.012646 kernel: RETBleed: Vulnerable
Mar 17 17:54:52.012773 kernel: Speculative Store Bypass: Vulnerable
Mar 17 17:54:52.012794 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:54:52.012810 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:54:52.012825 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 17 17:54:52.012841 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:54:52.012857 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:54:52.012873 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:54:52.012893 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 17 17:54:52.012909 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 17 17:54:52.012925 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 17 17:54:52.012941 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 17 17:54:52.012957 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 17 17:54:52.012979 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 17 17:54:52.012995 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:54:52.013011 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 17 17:54:52.013026 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 17 17:54:52.013144 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 17 17:54:52.013161 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 17 17:54:52.013236 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 17 17:54:52.013252 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 17 17:54:52.015293 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 17 17:54:52.015323 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:54:52.015340 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:54:52.015356 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:54:52.015372 kernel: landlock: Up and running.
Mar 17 17:54:52.015388 kernel: SELinux: Initializing.
Mar 17 17:54:52.015404 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:54:52.015420 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:54:52.015436 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 17 17:54:52.015458 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:54:52.015475 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:54:52.015491 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:54:52.015507 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 17 17:54:52.015523 kernel: signal: max sigframe size: 3632
Mar 17 17:54:52.015540 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:54:52.015557 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:54:52.015572 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:54:52.015589 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:54:52.015608 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:54:52.015624 kernel: .... node #0, CPUs: #1
Mar 17 17:54:52.015641 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 17 17:54:52.015659 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 17:54:52.015674 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:54:52.015691 kernel: smpboot: Max logical packages: 1
Mar 17 17:54:52.015707 kernel: smpboot: Total of 2 processors activated (9999.96 BogoMIPS)
Mar 17 17:54:52.015723 kernel: devtmpfs: initialized
Mar 17 17:54:52.015739 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:54:52.015760 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:54:52.015776 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:54:52.015792 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:54:52.015808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:54:52.015824 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:54:52.015840 kernel: audit: type=2000 audit(1742234090.568:1): state=initialized audit_enabled=0 res=1
Mar 17 17:54:52.015856 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:54:52.015873 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:54:52.015889 kernel: cpuidle: using governor menu
Mar 17 17:54:52.015908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:54:52.015924 kernel: dca service started, version 1.12.1
Mar 17 17:54:52.015939 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:54:52.015954 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:54:52.015969 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:54:52.015985 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:54:52.016001 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:54:52.016017 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:54:52.016034 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:54:52.016053 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:54:52.016068 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:54:52.016083 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:54:52.016159 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 17 17:54:52.016177 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:54:52.016193 kernel: ACPI: Interpreter enabled
Mar 17 17:54:52.016209 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:54:52.016225 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:54:52.016241 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:54:52.016260 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:54:52.017707 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 17 17:54:52.017726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:54:52.017947 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:54:52.018084 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:54:52.018216 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:54:52.018233 kernel: acpiphp: Slot [3] registered
Mar 17 17:54:52.018252 kernel: acpiphp: Slot [4] registered
Mar 17 17:54:52.018266 kernel: acpiphp: Slot [5] registered
Mar 17 17:54:52.018313 kernel: acpiphp: Slot [6] registered
Mar 17 17:54:52.018326 kernel: acpiphp: Slot [7] registered
Mar 17 17:54:52.018341 kernel: acpiphp: Slot [8] registered
Mar 17 17:54:52.018354 kernel: acpiphp: Slot [9] registered
Mar 17 17:54:52.018367 kernel: acpiphp: Slot [10] registered
Mar 17 17:54:52.018381 kernel: acpiphp: Slot [11] registered
Mar 17 17:54:52.018395 kernel: acpiphp: Slot [12] registered
Mar 17 17:54:52.018411 kernel: acpiphp: Slot [13] registered
Mar 17 17:54:52.018425 kernel: acpiphp: Slot [14] registered
Mar 17 17:54:52.018438 kernel: acpiphp: Slot [15] registered
Mar 17 17:54:52.018452 kernel: acpiphp: Slot [16] registered
Mar 17 17:54:52.018465 kernel: acpiphp: Slot [17] registered
Mar 17 17:54:52.018478 kernel: acpiphp: Slot [18] registered
Mar 17 17:54:52.018491 kernel: acpiphp: Slot [19] registered
Mar 17 17:54:52.018504 kernel: acpiphp: Slot [20] registered
Mar 17 17:54:52.018517 kernel: acpiphp: Slot [21] registered
Mar 17 17:54:52.018530 kernel: acpiphp: Slot [22] registered
Mar 17 17:54:52.018546 kernel: acpiphp: Slot [23] registered
Mar 17 17:54:52.018560 kernel: acpiphp: Slot [24] registered
Mar 17 17:54:52.018574 kernel: acpiphp: Slot [25] registered
Mar 17 17:54:52.018586 kernel: acpiphp: Slot [26] registered
Mar 17 17:54:52.018599 kernel: acpiphp: Slot [27] registered
Mar 17 17:54:52.018612 kernel: acpiphp: Slot [28] registered
Mar 17 17:54:52.018626 kernel: acpiphp: Slot [29] registered
Mar 17 17:54:52.018639 kernel: acpiphp: Slot [30] registered
Mar 17 17:54:52.018652 kernel: acpiphp: Slot [31] registered
Mar 17 17:54:52.018668 kernel: PCI host bridge to bus 0000:00
Mar 17 17:54:52.018807 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:54:52.018935 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:54:52.019058 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:54:52.019182 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 17:54:52.020376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:54:52.020554 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:54:52.020719 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 17:54:52.020873 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 17 17:54:52.021024 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 17 17:54:52.023361 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Mar 17 17:54:52.024964 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 17 17:54:52.025190 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 17 17:54:52.025434 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 17 17:54:52.025639 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 17 17:54:52.025785 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 17 17:54:52.026122 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 17 17:54:52.026484 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 17 17:54:52.026636 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Mar 17 17:54:52.026776 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 17:54:52.026916 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:54:52.027161 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:54:52.027422 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Mar 17 17:54:52.027583 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:54:52.027721 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Mar 17 17:54:52.027739 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:54:52.027755 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:54:52.027772 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:54:52.027794 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:54:52.027810 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:54:52.027826 kernel: iommu: Default domain type: Translated
Mar 17 17:54:52.027842 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:54:52.027857 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:54:52.027874 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:54:52.027891 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:54:52.027906 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Mar 17 17:54:52.028048 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 17 17:54:52.028359 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 17 17:54:52.028533 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:54:52.028557 kernel: vgaarb: loaded
Mar 17 17:54:52.028577 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 17 17:54:52.028596 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 17 17:54:52.028614 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:54:52.028633 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:54:52.028716 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:54:52.028757 kernel: pnp: PnP ACPI init
Mar 17 17:54:52.028777 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 17:54:52.028797 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:54:52.028816 kernel: NET: Registered PF_INET protocol family
Mar 17 17:54:52.028835 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:54:52.028854 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:54:52.028873 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:54:52.028892 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:54:52.028910 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:54:52.028933 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:54:52.028951 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:54:52.028978 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:54:52.028997 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:54:52.029093 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:54:52.029484 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:54:52.029627 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:54:52.029751 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:54:52.029879 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 17:54:52.030022 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:54:52.030043 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:54:52.030059 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:54:52.030075 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2409306edf7, max_idle_ns: 440795259305 ns
Mar 17 17:54:52.030091 kernel: clocksource: Switched to clocksource tsc
Mar 17 17:54:52.030107 kernel: Initialise system trusted keyrings
Mar 17 17:54:52.030122 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:54:52.030141 kernel: Key type asymmetric registered
Mar 17 17:54:52.030157 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:54:52.030172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:54:52.030187 kernel: io scheduler mq-deadline registered
Mar 17 17:54:52.030202 kernel: io scheduler kyber registered
Mar 17 17:54:52.030217 kernel: io scheduler bfq registered
Mar 17 17:54:52.030233 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:54:52.030248 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:54:52.030263 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:54:52.030307 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:54:52.030323 kernel: i8042: Warning: Keylock active
Mar 17 17:54:52.030338 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:54:52.030353 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:54:52.030601 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 17 17:54:52.030735 kernel: rtc_cmos 00:00: registered as rtc0
Mar 17 17:54:52.030864 kernel: rtc_cmos 00:00: setting system clock to 2025-03-17T17:54:51 UTC (1742234091)
Mar 17 17:54:52.030991 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 17 17:54:52.031015 kernel: intel_pstate: CPU model not supported
Mar 17 17:54:52.031030 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:54:52.031045 kernel: Segment Routing with IPv6
Mar 17 17:54:52.031061 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:54:52.031076 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:54:52.031092 kernel: Key type dns_resolver registered
Mar 17 17:54:52.031106 kernel: IPI shorthand broadcast: enabled
Mar 17 17:54:52.031122 kernel: sched_clock: Marking stable (537004935, 226657201)->(891728415, -128066279)
Mar 17 17:54:52.031137 kernel: registered taskstats version 1
Mar 17 17:54:52.031155 kernel: Loading compiled-in X.509 certificates
Mar 17 17:54:52.031171 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:54:52.031186 kernel: Key type .fscrypt registered
Mar 17 17:54:52.031201 kernel: Key type fscrypt-provisioning registered
Mar 17 17:54:52.031217 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:54:52.031232 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:54:52.031247 kernel: ima: No architecture policies found
Mar 17 17:54:52.031262 kernel: clk: Disabling unused clocks
Mar 17 17:54:52.031294 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:54:52.031314 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:54:52.031330 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:54:52.031345 kernel: Run /init as init process
Mar 17 17:54:52.031361 kernel: with arguments:
Mar 17 17:54:52.031376 kernel: /init
Mar 17 17:54:52.031392 kernel: with environment:
Mar 17 17:54:52.031407 kernel: HOME=/
Mar 17 17:54:52.031422 kernel: TERM=linux
Mar 17 17:54:52.031437 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:54:52.031460 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:54:52.031494 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:54:52.031516 systemd[1]: Detected virtualization amazon.
Mar 17 17:54:52.031532 systemd[1]: Detected architecture x86-64.
Mar 17 17:54:52.031548 systemd[1]: Running in initrd.
Mar 17 17:54:52.031568 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:54:52.031585 systemd[1]: Hostname set to .
Mar 17 17:54:52.031602 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:54:52.031619 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:54:52.031636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:54:52.031653 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:54:52.031671 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:54:52.031688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:54:52.031708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:54:52.031726 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:54:52.031744 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:54:52.031761 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:54:52.031778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:54:52.031795 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:54:52.031815 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:54:52.031832 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:54:52.031850 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:54:52.031866 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:54:52.031883 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:54:52.031900 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:54:52.031917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:54:52.031934 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:54:52.031951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:54:52.031970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:54:52.031987 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:54:52.032004 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:54:52.032021 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:54:52.032037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:54:52.032054 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:54:52.032071 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:54:52.032091 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:54:52.032143 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:54:52.032161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:54:52.032178 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:54:52.032196 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:54:52.032217 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:54:52.032264 systemd-journald[179]: Collecting audit messages is disabled. Mar 17 17:54:52.032316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:54:52.032338 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:54:52.032362 systemd-journald[179]: Journal started Mar 17 17:54:52.032395 systemd-journald[179]: Runtime Journal (/run/log/journal/ec27e308e8f0b0b8ff2b935f40b8e49d) is 4.8M, max 38.5M, 33.7M free. Mar 17 17:54:52.039316 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:54:52.047322 systemd-modules-load[180]: Inserted module 'overlay' Mar 17 17:54:52.173611 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 17 17:54:52.173652 kernel: Bridge firewalling registered Mar 17 17:54:52.055550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:54:52.087954 systemd-modules-load[180]: Inserted module 'br_netfilter' Mar 17 17:54:52.172174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:54:52.172782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:54:52.189511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:54:52.193889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:54:52.198440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:54:52.202526 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:54:52.212460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:54:52.223609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:54:52.227880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:54:52.238765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:54:52.255550 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:54:52.265607 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 17 17:54:52.289664 dracut-cmdline[216]: dracut-dracut-053 Mar 17 17:54:52.294468 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:54:52.321554 systemd-resolved[207]: Positive Trust Anchors: Mar 17 17:54:52.322898 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:54:52.324743 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:54:52.338579 systemd-resolved[207]: Defaulting to hostname 'linux'. Mar 17 17:54:52.340494 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:54:52.342815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:54:52.396344 kernel: SCSI subsystem initialized Mar 17 17:54:52.408307 kernel: Loading iSCSI transport class v2.0-870. 
Mar 17 17:54:52.421294 kernel: iscsi: registered transport (tcp)
Mar 17 17:54:52.446312 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:54:52.446386 kernel: QLogic iSCSI HBA Driver
Mar 17 17:54:52.490971 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:54:52.497577 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:54:52.537294 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:54:52.537378 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:54:52.537399 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:54:52.582302 kernel: raid6: avx512x4 gen() 15915 MB/s
Mar 17 17:54:52.599338 kernel: raid6: avx512x2 gen() 14285 MB/s
Mar 17 17:54:52.617303 kernel: raid6: avx512x1 gen() 13990 MB/s
Mar 17 17:54:52.634307 kernel: raid6: avx2x4 gen() 13949 MB/s
Mar 17 17:54:52.653307 kernel: raid6: avx2x2 gen() 8657 MB/s
Mar 17 17:54:52.674464 kernel: raid6: avx2x1 gen() 4151 MB/s
Mar 17 17:54:52.674544 kernel: raid6: using algorithm avx512x4 gen() 15915 MB/s
Mar 17 17:54:52.696751 kernel: raid6: .... xor() 1343 MB/s, rmw enabled
Mar 17 17:54:52.696840 kernel: raid6: using avx512x2 recovery algorithm
Mar 17 17:54:52.731309 kernel: xor: automatically using best checksumming function avx
Mar 17 17:54:52.919931 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:54:52.933690 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:54:52.940519 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:54:52.966245 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 17 17:54:52.973310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:54:52.982932 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:54:53.008595 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Mar 17 17:54:53.045771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:54:53.053569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:54:53.127649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:54:53.136626 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:54:53.174374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:54:53.178186 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:54:53.180809 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:54:53.182432 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:54:53.209134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:54:53.247914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:54:53.276288 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:54:53.291339 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:54:53.291413 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:54:53.313531 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:54:53.337817 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:54:53.338028 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 17 17:54:53.338203 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ec:14:0b:f7:7f
Mar 17 17:54:53.323952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:54:53.324110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:54:53.325832 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:54:53.327219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:54:53.327450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:53.328877 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:54:53.335597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:54:53.337296 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:54:53.338750 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:54:53.373713 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:54:53.373972 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 17:54:53.383309 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:54:53.390294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:54:53.390357 kernel: GPT:9289727 != 16777215
Mar 17 17:54:53.390379 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:54:53.390401 kernel: GPT:9289727 != 16777215
Mar 17 17:54:53.390420 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:54:53.390441 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:54:53.507116 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (449)
Mar 17 17:54:53.523309 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Mar 17 17:54:53.579596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:53.594111 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:54:53.689591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:54:53.693557 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:54:53.709427 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:54:53.726497 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:54:53.727954 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:54:53.794136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:54:53.814847 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:54:53.827667 disk-uuid[627]: Primary Header is updated.
Mar 17 17:54:53.827667 disk-uuid[627]: Secondary Entries is updated.
Mar 17 17:54:53.827667 disk-uuid[627]: Secondary Header is updated.
Mar 17 17:54:53.833296 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:54:53.840355 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:54:54.839490 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:54:54.839865 disk-uuid[628]: The operation has completed successfully.
Mar 17 17:54:55.027360 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:54:55.027485 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:54:55.076563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:54:55.093068 sh[886]: Success
Mar 17 17:54:55.116476 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 17:54:55.228989 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:54:55.239440 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:54:55.242235 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:54:55.310719 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:54:55.310789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:54:55.310809 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:54:55.310827 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:54:55.313384 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:54:55.441307 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:54:55.453846 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:54:55.457349 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:54:55.470885 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:54:55.477771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:54:55.509391 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:54:55.509471 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:54:55.509502 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:54:55.515306 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:54:55.531154 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:54:55.530545 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:54:55.540498 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:54:55.551594 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:54:55.615698 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:54:55.628576 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:54:55.675883 systemd-networkd[1079]: lo: Link UP
Mar 17 17:54:55.675896 systemd-networkd[1079]: lo: Gained carrier
Mar 17 17:54:55.679213 systemd-networkd[1079]: Enumeration completed
Mar 17 17:54:55.679486 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:54:55.681337 systemd-networkd[1079]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:55.681343 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:54:55.681769 systemd[1]: Reached target network.target - Network.
Mar 17 17:54:55.690642 systemd-networkd[1079]: eth0: Link UP
Mar 17 17:54:55.690652 systemd-networkd[1079]: eth0: Gained carrier
Mar 17 17:54:55.690669 systemd-networkd[1079]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:55.705873 systemd-networkd[1079]: eth0: DHCPv4 address 172.31.26.100/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:54:55.973575 ignition[1000]: Ignition 2.20.0
Mar 17 17:54:55.973670 ignition[1000]: Stage: fetch-offline
Mar 17 17:54:55.974003 ignition[1000]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:55.974017 ignition[1000]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:55.975065 ignition[1000]: Ignition finished successfully
Mar 17 17:54:55.980090 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:54:55.985499 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:54:56.045702 ignition[1090]: Ignition 2.20.0
Mar 17 17:54:56.045714 ignition[1090]: Stage: fetch
Mar 17 17:54:56.046060 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:56.046073 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:56.047138 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:56.105721 ignition[1090]: PUT result: OK
Mar 17 17:54:56.121431 ignition[1090]: parsed url from cmdline: ""
Mar 17 17:54:56.121497 ignition[1090]: no config URL provided
Mar 17 17:54:56.121509 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:54:56.121527 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:54:56.121555 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:56.124007 ignition[1090]: PUT result: OK
Mar 17 17:54:56.124076 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:54:56.127827 ignition[1090]: GET result: OK
Mar 17 17:54:56.129343 ignition[1090]: parsing config with SHA512: 78c9abcd4957a28bb18affee3c39f88ccb9c0dee051bda8eb977e5c290f47bf2f185222cf23e2be45328133412261f2e9720ce1cd11169c1631bf57c5143d28d
Mar 17 17:54:56.143386 unknown[1090]: fetched base config from "system"
Mar 17 17:54:56.143399 unknown[1090]: fetched base config from "system"
Mar 17 17:54:56.143735 ignition[1090]: fetch: fetch complete
Mar 17 17:54:56.143407 unknown[1090]: fetched user config from "aws"
Mar 17 17:54:56.143743 ignition[1090]: fetch: fetch passed
Mar 17 17:54:56.147400 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:54:56.145234 ignition[1090]: Ignition finished successfully
Mar 17 17:54:56.162668 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:54:56.202994 ignition[1096]: Ignition 2.20.0
Mar 17 17:54:56.203009 ignition[1096]: Stage: kargs
Mar 17 17:54:56.203997 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:56.204011 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:56.204770 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:56.209633 ignition[1096]: PUT result: OK
Mar 17 17:54:56.219355 ignition[1096]: kargs: kargs passed
Mar 17 17:54:56.219436 ignition[1096]: Ignition finished successfully
Mar 17 17:54:56.221751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:54:56.227610 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:54:56.264854 ignition[1102]: Ignition 2.20.0
Mar 17 17:54:56.264875 ignition[1102]: Stage: disks
Mar 17 17:54:56.265468 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:56.265485 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:56.265600 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:56.267035 ignition[1102]: PUT result: OK
Mar 17 17:54:56.274551 ignition[1102]: disks: disks passed
Mar 17 17:54:56.275910 ignition[1102]: Ignition finished successfully
Mar 17 17:54:56.284046 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:54:56.284613 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:54:56.296019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:54:56.296191 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:54:56.300820 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:54:56.301140 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:54:56.309484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:54:56.372794 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:54:56.378097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:54:56.387706 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:54:56.572388 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:54:56.574215 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:54:56.576845 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:54:56.595456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:54:56.599363 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:54:56.602355 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:54:56.602437 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:54:56.602476 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:54:56.618412 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1130)
Mar 17 17:54:56.622350 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:54:56.622397 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:54:56.622411 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:54:56.623227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:54:56.630464 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:54:56.637296 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:54:56.639596 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:54:56.993667 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:54:57.011587 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:54:57.017575 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:54:57.023295 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:54:57.300250 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:54:57.312808 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:54:57.329545 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:54:57.350756 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:54:57.352890 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:54:57.378896 ignition[1242]: INFO : Ignition 2.20.0
Mar 17 17:54:57.378896 ignition[1242]: INFO : Stage: mount
Mar 17 17:54:57.384537 ignition[1242]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:57.384537 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:57.384537 ignition[1242]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:57.389063 ignition[1242]: INFO : PUT result: OK
Mar 17 17:54:57.392359 ignition[1242]: INFO : mount: mount passed
Mar 17 17:54:57.393425 ignition[1242]: INFO : Ignition finished successfully
Mar 17 17:54:57.396453 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:54:57.404482 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:54:57.411747 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:54:57.425655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:54:57.457304 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1254)
Mar 17 17:54:57.458287 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:54:57.458349 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:54:57.460914 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:54:57.473496 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:54:57.475680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:54:57.502406 ignition[1271]: INFO : Ignition 2.20.0
Mar 17 17:54:57.502406 ignition[1271]: INFO : Stage: files
Mar 17 17:54:57.504469 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:57.504469 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:57.507326 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:57.509400 ignition[1271]: INFO : PUT result: OK
Mar 17 17:54:57.512399 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:54:57.514633 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:54:57.514633 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:54:57.532929 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:54:57.534494 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:54:57.536564 unknown[1271]: wrote ssh authorized keys file for user: core
Mar 17 17:54:57.538166 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:54:57.569583 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:54:57.571501 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 17:54:57.641474 systemd-networkd[1079]: eth0: Gained IPv6LL
Mar 17 17:54:57.926635 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Mar 17 17:54:58.459538 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:54:58.461970 ignition[1271]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:54:58.461970 ignition[1271]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:54:58.461970 ignition[1271]: INFO : files: files passed
Mar 17 17:54:58.461970 ignition[1271]: INFO : Ignition finished successfully
Mar 17 17:54:58.468580 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:54:58.483531 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:54:58.492539 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:54:58.504572 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:54:58.505018 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:54:58.516967 initrd-setup-root-after-ignition[1299]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:54:58.516967 initrd-setup-root-after-ignition[1299]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:54:58.522399 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:54:58.527568 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:54:58.530907 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:54:58.540662 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:54:58.611575 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:54:58.611760 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:54:58.615076 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:54:58.623374 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:54:58.629111 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:54:58.638959 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:54:58.684675 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:54:58.693533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:54:58.721039 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:54:58.723130 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:54:58.724306 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:54:58.730251 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:54:58.731735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:54:58.735851 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:54:58.739925 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:54:58.742074 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:54:58.744016 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:54:58.750042 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:54:58.755074 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:54:58.756551 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:54:58.765491 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:54:58.773718 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:54:58.779420 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:54:58.784028 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:54:58.784227 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:54:58.799184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:54:58.804424 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 17 17:54:58.809604 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:54:58.810825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:54:58.814437 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:54:58.817016 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:54:58.819735 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:54:58.819951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:54:58.836444 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:54:58.840705 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:54:58.847540 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:54:58.849693 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:54:58.850171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:54:58.866959 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:54:58.870320 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:54:58.870574 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:54:58.879697 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:54:58.879887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:54:58.903841 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:54:58.905724 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:54:58.914351 ignition[1323]: INFO : Ignition 2.20.0
Mar 17 17:54:58.914351 ignition[1323]: INFO : Stage: umount
Mar 17 17:54:58.918855 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:58.918855 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:54:58.918855 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:54:58.924888 ignition[1323]: INFO : PUT result: OK
Mar 17 17:54:58.927264 ignition[1323]: INFO : umount: umount passed
Mar 17 17:54:58.927264 ignition[1323]: INFO : Ignition finished successfully
Mar 17 17:54:58.929177 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:54:58.929323 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:54:58.945488 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:54:58.945602 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:54:58.947658 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:54:58.947723 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:54:58.953155 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:54:58.953222 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:54:58.957287 systemd[1]: Stopped target network.target - Network.
Mar 17 17:54:58.964052 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:54:58.964136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:54:58.965451 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:54:58.966359 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:54:58.971439 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:54:58.974499 systemd[1]: Stopped target slices.target - Slice Units.
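The `PUT http://169.254.169.254/latest/api/token` line above is Ignition using the EC2 IMDSv2 session-token flow: first a PUT requesting a token (with a TTL header), then metadata GETs that present the token. A minimal sketch of that two-step protocol, run against an in-process stub server so it is executable outside EC2; the stub handler and the placeholder instance id are illustrative, not part of Ignition:

```python
import http.server
import secrets
import threading
import urllib.request

TOKENS = set()

class StubIMDS(http.server.BaseHTTPRequestHandler):
    """Tiny local stand-in for the real IMDSv2 endpoint at 169.254.169.254."""

    def do_PUT(self):
        # Step 1 of IMDSv2: PUT /latest/api/token with a TTL header yields a token.
        if self.path == "/latest/api/token" and \
                "X-aws-ec2-metadata-token-ttl-seconds" in self.headers:
            token = secrets.token_hex(16)
            TOKENS.add(token)
            body = token.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(400)
            self.end_headers()

    def do_GET(self):
        # Step 2: metadata GETs must carry the session token, else 401.
        if self.headers.get("X-aws-ec2-metadata-token") in TOKENS:
            body = b"i-0123456789abcdef0"  # placeholder instance id
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(401)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

def fetch_instance_id(base):
    # The same two requests the Ignition log records ("PUT ... attempt #1").
    req = urllib.request.Request(
        base + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(req).read().decode()
    req = urllib.request.Request(
        base + "/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req).read().decode()

server = http.server.HTTPServer(("127.0.0.1", 0), StubIMDS)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(fetch_instance_id("http://127.0.0.1:%d" % server.server_port))
```

On a real instance the base URL would be `http://169.254.169.254` and the GET would return the actual instance id.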
Mar 17 17:54:58.974962 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:54:58.985465 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:54:58.987799 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:54:58.989363 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:54:58.989498 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:54:58.992679 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:54:58.992778 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:54:58.995312 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:54:58.997036 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:54:59.001946 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:54:59.007887 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:54:59.016342 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:54:59.018605 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:54:59.018740 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:54:59.023366 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:54:59.023480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:54:59.027493 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:54:59.028617 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:54:59.033683 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:54:59.033978 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:54:59.034101 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:54:59.039894 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:54:59.046069 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:54:59.046127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:54:59.056809 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:54:59.060466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:54:59.060575 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:54:59.062773 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:54:59.062850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:54:59.068712 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:54:59.069809 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:54:59.071160 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:54:59.071236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:54:59.075036 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:54:59.080244 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:54:59.081777 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:54:59.092229 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:54:59.092480 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:54:59.095961 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:54:59.097117 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:54:59.100747 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:54:59.100820 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:54:59.103102 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:54:59.103154 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:54:59.105617 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:54:59.105700 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:54:59.119393 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:54:59.119833 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:54:59.124625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:54:59.124726 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:54:59.141642 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:54:59.144453 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:54:59.144539 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:54:59.148867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:54:59.148952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:59.160952 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:54:59.161050 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:54:59.161488 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:54:59.161577 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:54:59.165080 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:54:59.174450 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:54:59.197056 systemd[1]: Switching root.
Mar 17 17:54:59.237015 systemd-journald[179]: Journal stopped
Mar 17 17:55:01.988785 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:55:01.988949 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:55:01.988986 kernel: SELinux: policy capability open_perms=1
Mar 17 17:55:01.989011 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:55:01.989033 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:55:01.989059 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:55:01.989090 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:55:01.989110 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:55:01.989130 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:55:01.989151 kernel: audit: type=1403 audit(1742234099.618:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:55:01.989178 systemd[1]: Successfully loaded SELinux policy in 86.669ms.
Mar 17 17:55:01.989207 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.111ms.
Mar 17 17:55:01.989236 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:55:01.989261 systemd[1]: Detected virtualization amazon.
Mar 17 17:55:01.996551 systemd[1]: Detected architecture x86-64.
Mar 17 17:55:01.996587 systemd[1]: Detected first boot.
Mar 17 17:55:01.996609 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:55:01.996629 zram_generator::config[1367]: No configuration found.
Mar 17 17:55:01.996653 kernel: Guest personality initialized and is inactive
Mar 17 17:55:01.996683 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:55:01.996700 kernel: Initialized host personality
Mar 17 17:55:01.996719 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:55:01.996815 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:55:01.996840 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:55:01.996906 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:55:01.996928 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:55:01.996948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:55:01.996969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:55:01.997001 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:55:01.997021 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:55:01.997041 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:55:01.997062 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:55:01.997081 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:55:01.997102 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:55:01.997119 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:55:01.997138 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:55:01.997164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:55:01.997186 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:55:01.997208 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:55:01.997228 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:55:01.997250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:55:01.997414 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:55:01.997443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:55:01.997466 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:55:01.997493 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:55:01.997516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:55:01.997540 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:55:01.997562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:55:01.997589 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:55:01.997611 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:55:01.997631 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:55:01.997650 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:55:01.997671 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:55:01.997695 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:55:01.997715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:55:01.997734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:55:01.997754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:55:01.997774 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:55:01.997794 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:55:01.997814 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:55:01.997834 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:55:01.997854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:01.997876 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:55:01.997965 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:55:01.997986 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:55:01.998008 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:55:01.998028 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:55:01.998052 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:55:01.998074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:01.998097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:55:01.998121 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:55:01.998141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:55:01.998161 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:55:01.998182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:55:01.998208 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:55:01.998229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:55:01.998249 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:55:02.017385 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:55:02.018853 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:55:02.018914 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:55:02.018935 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:55:02.018958 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:02.018986 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:55:02.019011 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:55:02.019032 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:55:02.019056 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:55:02.019074 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:55:02.019097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:55:02.019117 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:55:02.019135 systemd[1]: Stopped verity-setup.service.
Mar 17 17:55:02.019157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:02.019175 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:55:02.019199 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:55:02.019217 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:55:02.019236 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:55:02.019255 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:55:02.019297 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:55:02.019320 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:55:02.019339 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:55:02.019357 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:55:02.019377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:55:02.019396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:55:02.019414 kernel: loop: module loaded
Mar 17 17:55:02.019437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:55:02.019458 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:55:02.019479 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:55:02.019502 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:55:02.019525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:55:02.019546 kernel: fuse: init (API version 7.39)
Mar 17 17:55:02.019566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:55:02.019586 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:55:02.019608 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:55:02.019632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:55:02.019694 systemd-journald[1443]: Collecting audit messages is disabled.
Mar 17 17:55:02.019738 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:55:02.019761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:02.019784 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:55:02.019806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:55:02.019828 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:55:02.019854 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:55:02.019878 systemd-journald[1443]: Journal started
Mar 17 17:55:02.019919 systemd-journald[1443]: Runtime Journal (/run/log/journal/ec27e308e8f0b0b8ff2b935f40b8e49d) is 4.8M, max 38.5M, 33.7M free.
Mar 17 17:55:01.070676 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:55:01.087513 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 17 17:55:01.088131 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:55:02.032320 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:55:02.056108 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:55:02.045531 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:55:02.048327 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:55:02.050360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:55:02.050622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:55:02.052633 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:55:02.056008 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:55:02.062079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:55:02.064593 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:55:02.101390 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:55:02.150734 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:55:02.152162 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:55:02.163432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:55:02.204506 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:55:02.215551 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:55:02.217414 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:55:02.224610 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:55:02.226426 kernel: ACPI: bus type drm_connector registered
Mar 17 17:55:02.226742 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:55:02.238386 kernel: loop0: detected capacity change from 0 to 147912
Mar 17 17:55:02.248609 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:55:02.272042 systemd-journald[1443]: Time spent on flushing to /var/log/journal/ec27e308e8f0b0b8ff2b935f40b8e49d is 72.309ms for 960 entries.
Mar 17 17:55:02.272042 systemd-journald[1443]: System Journal (/var/log/journal/ec27e308e8f0b0b8ff2b935f40b8e49d) is 8M, max 195.6M, 187.6M free.
Mar 17 17:55:02.352308 systemd-journald[1443]: Received client request to flush runtime journal.
Mar 17 17:55:02.283756 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:55:02.285989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:55:02.305384 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:55:02.316895 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:55:02.330765 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:55:02.363057 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:55:02.382716 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:55:02.381964 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:55:02.384684 udevadm[1512]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:55:02.386906 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:55:02.440302 kernel: loop1: detected capacity change from 0 to 138176
Mar 17 17:55:02.504347 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:55:02.538326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:55:02.671138 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Mar 17 17:55:02.671171 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Mar 17 17:55:02.705039 kernel: loop2: detected capacity change from 0 to 205544
Mar 17 17:55:02.715880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:55:02.823841 kernel: loop3: detected capacity change from 0 to 62832
Mar 17 17:55:02.995387 kernel: loop4: detected capacity change from 0 to 147912
Mar 17 17:55:03.076301 kernel: loop5: detected capacity change from 0 to 138176
Mar 17 17:55:03.125776 kernel: loop6: detected capacity change from 0 to 205544
Mar 17 17:55:03.207173 kernel: loop7: detected capacity change from 0 to 62832
Mar 17 17:55:03.248969 (sd-merge)[1525]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 17 17:55:03.251196 (sd-merge)[1525]: Merged extensions into '/usr'.
Mar 17 17:55:03.279961 systemd[1]: Reload requested from client PID 1473 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:55:03.281959 systemd[1]: Reloading...
Mar 17 17:55:03.432305 zram_generator::config[1553]: No configuration found.
Mar 17 17:55:03.705444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:55:03.865056 systemd[1]: Reloading finished in 580 ms.
Mar 17 17:55:03.882358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:55:03.901303 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:55:03.920342 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:55:04.007198 systemd[1]: Reload requested from client PID 1601 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:55:04.007473 systemd[1]: Reloading...
Mar 17 17:55:04.037409 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:55:04.038720 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
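The `(sd-merge)` lines record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extension images onto `/usr` (each image is attached as a loop device, hence the paired `loopN` capacity-change messages). An extension image is only merged if it carries an extension-release file whose fields are compatible with the host's `/etc/os-release`. A hedged, illustrative sketch of such a file; the exact keys shipped by Flatcar's extensions may differ:

```
# usr/lib/extension-release.d/extension-release.kubernetes  (illustrative)
# ID must match the host os-release ID (or be "_any");
# SYSEXT_LEVEL/VERSION_ID gate compatibility further.
ID=flatcar
SYSEXT_LEVEL=1.0
```

With matching metadata, `systemd-sysext merge` stacks the image's `/usr` tree over the host's via overlayfs, which is why the merged binaries appear without modifying the read-only base image.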
Mar 17 17:55:04.045321 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:55:04.045991 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Mar 17 17:55:04.046088 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Mar 17 17:55:04.066252 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:55:04.066288 systemd-tmpfiles[1602]: Skipping /boot
Mar 17 17:55:04.109684 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:55:04.109705 systemd-tmpfiles[1602]: Skipping /boot
Mar 17 17:55:04.266306 zram_generator::config[1632]: No configuration found.
Mar 17 17:55:04.415892 ldconfig[1465]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:55:04.509453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:55:04.606057 systemd[1]: Reloading finished in 597 ms.
Mar 17 17:55:04.623605 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:55:04.626288 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:55:04.643781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:55:04.660679 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:55:04.667370 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:55:04.675575 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:55:04.680789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
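The benign "Duplicate line for path ..., ignoring" warnings above arise because two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition it parses (fragments are processed in sorted order across /etc, /run, and /usr/lib) and ignores later duplicates. A minimal illustration of the tmpfiles.d(5) line format the warnings refer to; the path, mode, and age below are hypothetical, not taken from the log:

```
# /etc/tmpfiles.d/example.conf  (hypothetical fragment)
# Type  Path          Mode  User  Group  Age  Argument
d       /run/example  0755  root  root   10d  -
```

A second fragment that also declared `/run/example` would trigger exactly this "Duplicate line for path" message, with the earlier-sorted file winning.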
Mar 17 17:55:04.686582 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:55:04.702160 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:55:04.713836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:04.714130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:04.734017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:55:04.738029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:55:04.745712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:55:04.746992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:04.747189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:04.762425 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:55:04.763585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:04.765334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:55:04.765594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:55:04.787709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:04.788105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:04.799902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:55:04.800137 systemd-udevd[1689]: Using default interface naming scheme 'v255'. Mar 17 17:55:04.802065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:55:04.802670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:55:04.802914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:04.810365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:55:04.811170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:55:04.815248 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:55:04.819502 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:55:04.820313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:55:04.830331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:55:04.843899 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:55:04.846435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:55:04.847092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:55:04.851987 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:04.855804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 17 17:55:04.869453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:55:04.878643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:55:04.883492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:55:04.885411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:55:04.885852 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:55:04.886297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:55:04.886961 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:55:04.903494 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:55:04.907432 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:04.929060 systemd[1]: Finished ensure-sysext.service. Mar 17 17:55:04.930717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:55:04.932513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:55:04.943780 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:55:04.954778 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:55:04.957415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:55:04.965647 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:55:04.966589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 17 17:55:04.970654 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:55:04.974051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:55:04.982590 augenrules[1733]: No rules Mar 17 17:55:04.984739 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:55:04.985050 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:55:04.997362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:55:05.015494 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:55:05.129764 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:55:05.137528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:55:05.244368 systemd-resolved[1688]: Positive Trust Anchors: Mar 17 17:55:05.244386 systemd-resolved[1688]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:55:05.244447 systemd-resolved[1688]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:55:05.260785 systemd-resolved[1688]: Defaulting to hostname 'linux'. Mar 17 17:55:05.260902 (udev-worker)[1744]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:55:05.267209 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:55:05.268925 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:55:05.272685 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:55:05.274554 systemd-networkd[1743]: lo: Link UP Mar 17 17:55:05.274563 systemd-networkd[1743]: lo: Gained carrier Mar 17 17:55:05.275553 systemd-networkd[1743]: Enumeration completed Mar 17 17:55:05.275766 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:55:05.277881 systemd[1]: Reached target network.target - Network. Mar 17 17:55:05.289198 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:55:05.306390 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:55:05.368566 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:55:05.400254 systemd-networkd[1743]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:55:05.400286 systemd-networkd[1743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:55:05.403806 systemd-networkd[1743]: eth0: Link UP Mar 17 17:55:05.404023 systemd-networkd[1743]: eth0: Gained carrier Mar 17 17:55:05.404055 systemd-networkd[1743]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:55:05.414396 systemd-networkd[1743]: eth0: DHCPv4 address 172.31.26.100/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:55:05.475307 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1747) Mar 17 17:55:05.477303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 17:55:05.488387 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Mar 17 17:55:05.497055 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:55:05.540428 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 17:55:05.540532 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Mar 17 17:55:05.549301 kernel: ACPI: button: Sleep Button [SLPF] Mar 17 17:55:05.754303 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:55:05.757240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:55:05.800549 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 17 17:55:05.836435 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:55:05.843344 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:55:05.847630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:55:05.869306 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:55:05.890992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:55:05.914885 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:55:05.915540 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Mar 17 17:55:05.920755 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:55:05.930893 lvm[1864]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:55:05.962896 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:55:06.182846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:55:06.184617 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:55:06.189485 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:55:06.193242 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:55:06.195090 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:55:06.197712 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:55:06.199710 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:55:06.201400 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:55:06.201448 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:55:06.202577 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:55:06.205742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:55:06.209121 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:55:06.217908 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:55:06.220566 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:55:06.222529 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Mar 17 17:55:06.232692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:55:06.235501 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:55:06.246442 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:55:06.248548 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:55:06.251511 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:55:06.254064 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:55:06.254100 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:55:06.263499 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:55:06.269515 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:55:06.276591 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:55:06.282481 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:55:06.293555 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:55:06.296504 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:55:06.310721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:55:06.321476 systemd[1]: Started ntpd.service - Network Time Service. Mar 17 17:55:06.334515 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 17 17:55:06.355963 jq[1874]: false Mar 17 17:55:06.364539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:55:06.377742 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:55:06.387791 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 17 17:55:06.391576 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:55:06.394533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:55:06.399617 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:55:06.405567 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:55:06.409333 extend-filesystems[1875]: Found loop4 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found loop5 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found loop6 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found loop7 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found nvme0n1 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found nvme0n1p1 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found nvme0n1p2 Mar 17 17:55:06.409333 extend-filesystems[1875]: Found nvme0n1p3 Mar 17 17:55:06.420230 extend-filesystems[1875]: Found usr Mar 17 17:55:06.420230 extend-filesystems[1875]: Found nvme0n1p4 Mar 17 17:55:06.420230 extend-filesystems[1875]: Found nvme0n1p6 Mar 17 17:55:06.420230 extend-filesystems[1875]: Found nvme0n1p7 Mar 17 17:55:06.420230 extend-filesystems[1875]: Found nvme0n1p9 Mar 17 17:55:06.420230 extend-filesystems[1875]: Checking size of /dev/nvme0n1p9 Mar 17 17:55:06.414835 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:55:06.415105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:55:06.415983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:55:06.416171 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 17 17:55:06.523834 jq[1886]: true Mar 17 17:55:06.526832 (ntainerd)[1904]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:55:06.526714 dbus-daemon[1873]: [system] SELinux support is enabled Mar 17 17:55:06.531598 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:55:06.547907 extend-filesystems[1875]: Resized partition /dev/nvme0n1p9 Mar 17 17:55:06.540429 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:55:06.540471 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:55:06.542023 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:55:06.542111 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:55:06.550693 extend-filesystems[1912]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:55:06.565961 update_engine[1885]: I20250317 17:55:06.562990 1885 main.cc:92] Flatcar Update Engine starting Mar 17 17:55:06.564399 dbus-daemon[1873]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1743 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 17:55:06.596140 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 17:55:06.588453 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 17 17:55:06.590395 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:55:06.590675 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 17 17:55:06.598566 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:55:06.614614 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:55:06.623796 update_engine[1885]: I20250317 17:55:06.623535 1885 update_check_scheduler.cc:74] Next update check in 6m3s Mar 17 17:55:06.631518 ntpd[1877]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:20 UTC 2025 (1): Starting Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:20 UTC 2025 (1): Starting Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: ---------------------------------------------------- Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: corporation. Support and training for ntp-4 are Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: available at https://www.nwtime.org/support Mar 17 17:55:06.632871 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: ---------------------------------------------------- Mar 17 17:55:06.632237 ntpd[1877]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:55:06.632248 ntpd[1877]: ---------------------------------------------------- Mar 17 17:55:06.632259 ntpd[1877]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:55:06.632279 ntpd[1877]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:55:06.632298 ntpd[1877]: corporation. Support and training for ntp-4 are
Mar 17 17:55:06.632308 ntpd[1877]: available at https://www.nwtime.org/support Mar 17 17:55:06.632317 ntpd[1877]: ---------------------------------------------------- Mar 17 17:55:06.639802 ntpd[1877]: proto: precision = 0.070 usec (-24) Mar 17 17:55:06.648425 jq[1910]: true Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: proto: precision = 0.070 usec (-24) Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: basedate set to 2025-03-05 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: gps base set to 2025-03-09 (week 2357) Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Listen normally on 3 eth0 172.31.26.100:123 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Listen normally on 4 lo [::1]:123 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: bind(21) AF_INET6 fe80::4ec:14ff:fe0b:f77f%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: unable to create socket on eth0 (5) for fe80::4ec:14ff:fe0b:f77f%2#123 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: failed to init interface for address fe80::4ec:14ff:fe0b:f77f%2 Mar 17 17:55:06.662008 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: Listening on routing socket on fd #21 for interface updates Mar 17 17:55:06.640820 ntpd[1877]: basedate set to 2025-03-05 Mar 17 17:55:06.640842 ntpd[1877]: gps base set to 2025-03-09 (week 2357) Mar 17 17:55:06.657701 ntpd[1877]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:55:06.657761 ntpd[1877]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:55:06.660855 ntpd[1877]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:55:06.660910 ntpd[1877]: Listen normally on 3 eth0 172.31.26.100:123 Mar 17 17:55:06.660963 ntpd[1877]: Listen normally on 4 lo [::1]:123 Mar 17 17:55:06.661032 ntpd[1877]: bind(21) AF_INET6 fe80::4ec:14ff:fe0b:f77f%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:55:06.661055 ntpd[1877]: unable to create socket on eth0 (5) for fe80::4ec:14ff:fe0b:f77f%2#123 Mar 17 17:55:06.661069 ntpd[1877]: failed to init interface for address fe80::4ec:14ff:fe0b:f77f%2 Mar 17 17:55:06.661111 ntpd[1877]: Listening on routing socket on fd #21 for interface updates Mar 17 17:55:06.670137 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:55:06.670187 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:55:06.670470 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:55:06.670470 ntpd[1877]: 17 Mar 17:55:06 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:55:06.681215 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 17 17:55:06.703873 systemd-logind[1884]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:55:06.704373 systemd-logind[1884]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 17 17:55:06.704403 systemd-logind[1884]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:55:06.707300 systemd-logind[1884]: New seat seat0. Mar 17 17:55:06.710361 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:55:06.730691 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 17:55:06.750375 extend-filesystems[1912]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 17:55:06.750375 extend-filesystems[1912]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:55:06.750375 extend-filesystems[1912]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 17 17:55:06.755501 extend-filesystems[1875]: Resized filesystem in /dev/nvme0n1p9 Mar 17 17:55:06.758149 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:55:06.760217 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:55:06.833780 bash[1946]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:55:06.839300 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1741) Mar 17 17:55:06.853557 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:55:06.858980 coreos-metadata[1872]: Mar 17 17:55:06.858 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:55:06.871643 systemd[1]: Starting sshkeys.service... Mar 17 17:55:06.882021 coreos-metadata[1872]: Mar 17 17:55:06.881 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 17 17:55:06.885374 coreos-metadata[1872]: Mar 17 17:55:06.885 INFO Fetch successful Mar 17 17:55:06.885505 coreos-metadata[1872]: Mar 17 17:55:06.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 17 17:55:06.891718 coreos-metadata[1872]: Mar 17 17:55:06.890 INFO Fetch successful Mar 17 17:55:06.891718 coreos-metadata[1872]: Mar 17 17:55:06.891 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 17 17:55:06.895720 coreos-metadata[1872]: Mar 17 17:55:06.893 INFO Fetch successful Mar 17 17:55:06.895720 coreos-metadata[1872]: Mar 17 17:55:06.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 17 17:55:06.898093 coreos-metadata[1872]: Mar 17 17:55:06.898 INFO Fetch successful Mar 17 17:55:06.898216 coreos-metadata[1872]: Mar 17 17:55:06.898 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 17 17:55:06.901306 coreos-metadata[1872]: Mar 17 17:55:06.899 INFO Fetch failed with 404: resource not found
Mar 17 17:55:06.901306 coreos-metadata[1872]: Mar 17 17:55:06.899 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 17 17:55:06.907620 coreos-metadata[1872]: Mar 17 17:55:06.907 INFO Fetch successful Mar 17 17:55:06.907759 coreos-metadata[1872]: Mar 17 17:55:06.907 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 17 17:55:06.912521 coreos-metadata[1872]: Mar 17 17:55:06.912 INFO Fetch successful Mar 17 17:55:06.912642 coreos-metadata[1872]: Mar 17 17:55:06.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 17 17:55:06.914548 coreos-metadata[1872]: Mar 17 17:55:06.913 INFO Fetch successful Mar 17 17:55:06.914548 coreos-metadata[1872]: Mar 17 17:55:06.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 17 17:55:06.916827 coreos-metadata[1872]: Mar 17 17:55:06.915 INFO Fetch successful Mar 17 17:55:06.916827 coreos-metadata[1872]: Mar 17 17:55:06.915 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 17 17:55:06.922947 coreos-metadata[1872]: Mar 17 17:55:06.919 INFO Fetch successful Mar 17 17:55:06.992871 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:55:07.005510 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:55:07.093396 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:55:07.096392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:55:07.099445 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 17 17:55:07.105031 dbus-daemon[1873]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 17:55:07.106439 dbus-daemon[1873]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1917 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 17:55:07.120708 systemd[1]: Starting polkit.service - Authorization Manager... Mar 17 17:55:07.236768 polkitd[2002]: Started polkitd version 121 Mar 17 17:55:07.293349 polkitd[2002]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:55:07.293462 polkitd[2002]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:55:07.307668 polkitd[2002]: Finished loading, compiling and executing 2 rules Mar 17 17:55:07.314315 dbus-daemon[1873]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:55:07.318070 polkitd[2002]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 17:55:07.318375 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 17 17:55:07.331536 locksmithd[1919]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:55:07.387757 coreos-metadata[1969]: Mar 17 17:55:07.387 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:55:07.389330 coreos-metadata[1969]: Mar 17 17:55:07.389 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 17 17:55:07.390308 coreos-metadata[1969]: Mar 17 17:55:07.390 INFO Fetch successful Mar 17 17:55:07.390308 coreos-metadata[1969]: Mar 17 17:55:07.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 17:55:07.396610 coreos-metadata[1969]: Mar 17 17:55:07.395 INFO Fetch successful Mar 17 17:55:07.399438 unknown[1969]: wrote ssh authorized keys file for user: core Mar 17 17:55:07.403504 systemd-hostnamed[1917]: Hostname set to (transient) Mar 17 17:55:07.404231 systemd-resolved[1688]: System hostname changed to 'ip-172-31-26-100'. Mar 17 17:55:07.431432 systemd-networkd[1743]: eth0: Gained IPv6LL Mar 17 17:55:07.437546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:55:07.441205 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:55:07.451735 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 17 17:55:07.462639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:07.472339 update-ssh-keys[2056]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:55:07.475566 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:55:07.479383 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:55:07.491104 systemd[1]: Finished sshkeys.service. 
Mar 17 17:55:07.502525 containerd[1904]: time="2025-03-17T17:55:07.502374028Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:55:07.602725 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:55:07.641489 amazon-ssm-agent[2062]: Initializing new seelog logger Mar 17 17:55:07.641489 amazon-ssm-agent[2062]: New Seelog Logger Creation Complete Mar 17 17:55:07.641489 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.641489 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.646332 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 processing appconfig overrides Mar 17 17:55:07.646332 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.646332 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.646332 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 processing appconfig overrides Mar 17 17:55:07.648335 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.648335 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.648450 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 processing appconfig overrides Mar 17 17:55:07.649188 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO Proxy environment variables: Mar 17 17:55:07.653200 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:55:07.653200 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 17 17:55:07.653362 amazon-ssm-agent[2062]: 2025/03/17 17:55:07 processing appconfig overrides Mar 17 17:55:07.663617 containerd[1904]: time="2025-03-17T17:55:07.663527022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.676722 containerd[1904]: time="2025-03-17T17:55:07.676660935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:07.676722 containerd[1904]: time="2025-03-17T17:55:07.676718703Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:55:07.676870 containerd[1904]: time="2025-03-17T17:55:07.676742852Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:55:07.676942 containerd[1904]: time="2025-03-17T17:55:07.676921533Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:55:07.677011 containerd[1904]: time="2025-03-17T17:55:07.676952094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.677049 containerd[1904]: time="2025-03-17T17:55:07.677033750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:07.677086 containerd[1904]: time="2025-03-17T17:55:07.677053551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.677410 containerd[1904]: time="2025-03-17T17:55:07.677384616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:55:07.677468 containerd[1904]: time="2025-03-17T17:55:07.677413502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.677468 containerd[1904]: time="2025-03-17T17:55:07.677432962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:07.677468 containerd[1904]: time="2025-03-17T17:55:07.677448880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.677582 containerd[1904]: time="2025-03-17T17:55:07.677550630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.678284 containerd[1904]: time="2025-03-17T17:55:07.677805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:07.678284 containerd[1904]: time="2025-03-17T17:55:07.677997027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:07.678284 containerd[1904]: time="2025-03-17T17:55:07.678015035Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:55:07.678284 containerd[1904]: time="2025-03-17T17:55:07.678100211Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:55:07.678284 containerd[1904]: time="2025-03-17T17:55:07.678151380Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:55:07.692102 containerd[1904]: time="2025-03-17T17:55:07.692052490Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:55:07.692221 containerd[1904]: time="2025-03-17T17:55:07.692133661Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:55:07.692221 containerd[1904]: time="2025-03-17T17:55:07.692169111Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:55:07.692221 containerd[1904]: time="2025-03-17T17:55:07.692190854Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:55:07.692221 containerd[1904]: time="2025-03-17T17:55:07.692211780Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692414848Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692751657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692864016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692885434Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692905760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692925917Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692945823Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.692985254Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.693007283Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.693028955Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.693047331Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.693190151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.693213285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:55:07.693297 containerd[1904]: time="2025-03-17T17:55:07.693298618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693319795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693336275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693352305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693370240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693388566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693408312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693427628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693446456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693468713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693487204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693505964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693524475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693545599Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693582723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.693813 containerd[1904]: time="2025-03-17T17:55:07.693603277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693620123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693672183Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693700694Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693718196Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693737916Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693752171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693770342Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693785140Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:55:07.694341 containerd[1904]: time="2025-03-17T17:55:07.693804080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:55:07.701445 containerd[1904]: time="2025-03-17T17:55:07.694219856Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:55:07.701445 containerd[1904]: time="2025-03-17T17:55:07.699685772Z" level=info msg="Connect containerd service" Mar 17 17:55:07.701445 containerd[1904]: time="2025-03-17T17:55:07.699842927Z" level=info msg="using legacy CRI server" Mar 17 17:55:07.701445 containerd[1904]: time="2025-03-17T17:55:07.699855892Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:55:07.701445 containerd[1904]: time="2025-03-17T17:55:07.700024185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:55:07.701923 containerd[1904]: time="2025-03-17T17:55:07.701501002Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:55:07.701977 containerd[1904]: time="2025-03-17T17:55:07.701947389Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.703826602Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.703917286Z" level=info msg="Start subscribing containerd event" Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.703972075Z" level=info msg="Start recovering state" Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.704059097Z" level=info msg="Start event monitor" Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.704091199Z" level=info msg="Start snapshots syncer" Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.704104292Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:55:07.704365 containerd[1904]: time="2025-03-17T17:55:07.704115047Z" level=info msg="Start streaming server" Mar 17 17:55:07.704353 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:55:07.706890 containerd[1904]: time="2025-03-17T17:55:07.706220136Z" level=info msg="containerd successfully booted in 0.209270s" Mar 17 17:55:07.751082 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO https_proxy: Mar 17 17:55:07.850770 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO http_proxy: Mar 17 17:55:07.897956 sshd_keygen[1914]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:55:07.938963 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:55:07.949655 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:55:07.950775 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO no_proxy: Mar 17 17:55:07.961948 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:55:07.962224 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:55:07.969736 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
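The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is normal before any CNI network is installed. A sketch of the minimal kind of `.conflist` the CRI plugin looks for in that directory (the network name and subnet below are made-up placeholders; field names follow the CNI bridge/host-local plugin schema):

```python
import json

# Illustrative minimal CNI conflist; placeholder name and subnet.
MINIMAL_CONFLIST = {
    "cniVersion": "1.0.0",
    "name": "examplenet",              # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",   # placeholder pod subnet
            },
        }
    ],
}

def render_conflist() -> str:
    """Serialize the config as it would be written to /etc/cni/net.d/*.conflist."""
    return json.dumps(MINIMAL_CONFLIST, indent=2)
```

Once a file like this exists, the "cni network conf syncer" started above picks it up without a containerd restart.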
Mar 17 17:55:07.987283 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:55:07.994965 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:55:08.006125 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:55:08.007595 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:55:08.007595 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:55:08.007595 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO Agent will take identity from EC2 Mar 17 17:55:08.007595 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:55:08.007595 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:55:08.007595 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:55:08.008594 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [Registrar] Starting registrar module Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [EC2Identity] EC2 registration was successful. 
Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:07 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:55:08.009435 amazon-ssm-agent[2062]: 2025-03-17 17:55:08 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:55:08.049610 amazon-ssm-agent[2062]: 2025-03-17 17:55:08 INFO [CredentialRefresher] Next credential rotation will be in 31.024946932883335 minutes Mar 17 17:55:09.045387 amazon-ssm-agent[2062]: 2025-03-17 17:55:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:55:09.133501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:09.135069 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:55:09.136345 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:55:09.138915 systemd[1]: Startup finished in 687ms (kernel) + 7.862s (initrd) + 9.604s (userspace) = 18.154s. 
Mar 17 17:55:09.153463 amazon-ssm-agent[2062]: 2025-03-17 17:55:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2109) started Mar 17 17:55:09.276942 amazon-ssm-agent[2062]: 2025-03-17 17:55:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:55:09.633004 ntpd[1877]: Listen normally on 6 eth0 [fe80::4ec:14ff:fe0b:f77f%2]:123 Mar 17 17:55:09.633796 ntpd[1877]: 17 Mar 17:55:09 ntpd[1877]: Listen normally on 6 eth0 [fe80::4ec:14ff:fe0b:f77f%2]:123 Mar 17 17:55:10.131891 kubelet[2119]: E0317 17:55:10.131809 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:55:10.134576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:55:10.134781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:55:10.135162 systemd[1]: kubelet.service: Consumed 937ms CPU time, 234.4M memory peak. Mar 17 17:55:15.801756 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:55:15.817013 systemd[1]: Started sshd@0-172.31.26.100:22-139.178.89.65:56722.service - OpenSSH per-connection server daemon (139.178.89.65:56722). Mar 17 17:55:16.025503 sshd[2136]: Accepted publickey for core from 139.178.89.65 port 56722 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:16.029362 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:16.086351 systemd-logind[1884]: New session 1 of user core. Mar 17 17:55:16.090477 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 17 17:55:16.097779 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:55:16.114563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:55:16.124870 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:55:16.129090 (systemd)[2140]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:55:16.132221 systemd-logind[1884]: New session c1 of user core. Mar 17 17:55:16.326456 systemd[2140]: Queued start job for default target default.target. Mar 17 17:55:16.336673 systemd[2140]: Created slice app.slice - User Application Slice. Mar 17 17:55:16.336716 systemd[2140]: Reached target paths.target - Paths. Mar 17 17:55:16.336854 systemd[2140]: Reached target timers.target - Timers. Mar 17 17:55:16.339553 systemd[2140]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:55:16.363131 systemd[2140]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:55:16.363313 systemd[2140]: Reached target sockets.target - Sockets. Mar 17 17:55:16.363383 systemd[2140]: Reached target basic.target - Basic System. Mar 17 17:55:16.363439 systemd[2140]: Reached target default.target - Main User Target. Mar 17 17:55:16.363476 systemd[2140]: Startup finished in 223ms. Mar 17 17:55:16.363752 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:55:16.374529 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:55:16.555416 systemd[1]: Started sshd@1-172.31.26.100:22-139.178.89.65:56734.service - OpenSSH per-connection server daemon (139.178.89.65:56734). 
Mar 17 17:55:16.787323 sshd[2151]: Accepted publickey for core from 139.178.89.65 port 56734 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:16.788798 sshd-session[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:16.799563 systemd-logind[1884]: New session 2 of user core. Mar 17 17:55:16.806702 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:55:16.936619 sshd[2153]: Connection closed by 139.178.89.65 port 56734 Mar 17 17:55:16.938015 sshd-session[2151]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:16.943475 systemd[1]: sshd@1-172.31.26.100:22-139.178.89.65:56734.service: Deactivated successfully. Mar 17 17:55:16.946836 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:55:16.948088 systemd-logind[1884]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:55:16.949902 systemd-logind[1884]: Removed session 2. Mar 17 17:55:16.980513 systemd[1]: Started sshd@2-172.31.26.100:22-139.178.89.65:56742.service - OpenSSH per-connection server daemon (139.178.89.65:56742). Mar 17 17:55:17.173330 sshd[2159]: Accepted publickey for core from 139.178.89.65 port 56742 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:17.175145 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:17.186819 systemd-logind[1884]: New session 3 of user core. Mar 17 17:55:17.195557 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:55:17.320506 sshd[2161]: Connection closed by 139.178.89.65 port 56742 Mar 17 17:55:17.321191 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:17.335398 systemd[1]: sshd@2-172.31.26.100:22-139.178.89.65:56742.service: Deactivated successfully. Mar 17 17:55:17.341666 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:55:17.345620 systemd-logind[1884]: Session 3 logged out. 
Waiting for processes to exit. Mar 17 17:55:17.366927 systemd[1]: Started sshd@3-172.31.26.100:22-139.178.89.65:56746.service - OpenSSH per-connection server daemon (139.178.89.65:56746). Mar 17 17:55:17.370238 systemd-logind[1884]: Removed session 3. Mar 17 17:55:17.544770 sshd[2166]: Accepted publickey for core from 139.178.89.65 port 56746 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:17.547410 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:17.556298 systemd-logind[1884]: New session 4 of user core. Mar 17 17:55:17.563500 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:55:17.703758 sshd[2169]: Connection closed by 139.178.89.65 port 56746 Mar 17 17:55:17.704443 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:17.708336 systemd[1]: sshd@3-172.31.26.100:22-139.178.89.65:56746.service: Deactivated successfully. Mar 17 17:55:17.711079 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:55:17.713205 systemd-logind[1884]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:55:17.717102 systemd-logind[1884]: Removed session 4. Mar 17 17:55:17.748707 systemd[1]: Started sshd@4-172.31.26.100:22-139.178.89.65:56756.service - OpenSSH per-connection server daemon (139.178.89.65:56756). Mar 17 17:55:17.911128 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 56756 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:17.913592 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:17.919768 systemd-logind[1884]: New session 5 of user core. Mar 17 17:55:17.931651 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:55:18.081863 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:55:18.083197 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:18.123664 sudo[2178]: pam_unix(sudo:session): session closed for user root Mar 17 17:55:18.149383 sshd[2177]: Connection closed by 139.178.89.65 port 56756 Mar 17 17:55:18.150415 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:18.156066 systemd[1]: sshd@4-172.31.26.100:22-139.178.89.65:56756.service: Deactivated successfully. Mar 17 17:55:18.158657 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:55:18.159689 systemd-logind[1884]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:55:18.160781 systemd-logind[1884]: Removed session 5. Mar 17 17:55:18.192756 systemd[1]: Started sshd@5-172.31.26.100:22-139.178.89.65:56772.service - OpenSSH per-connection server daemon (139.178.89.65:56772). Mar 17 17:55:18.365929 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 56772 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:18.368311 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:18.378509 systemd-logind[1884]: New session 6 of user core. Mar 17 17:55:18.392635 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:55:18.500157 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:55:18.500646 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:18.509356 sudo[2188]: pam_unix(sudo:session): session closed for user root Mar 17 17:55:18.520078 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:55:18.520481 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:18.540935 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:55:18.590012 augenrules[2210]: No rules Mar 17 17:55:18.591665 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:55:18.592072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:55:18.594694 sudo[2187]: pam_unix(sudo:session): session closed for user root Mar 17 17:55:18.618016 sshd[2186]: Connection closed by 139.178.89.65 port 56772 Mar 17 17:55:18.618730 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:18.622670 systemd[1]: sshd@5-172.31.26.100:22-139.178.89.65:56772.service: Deactivated successfully. Mar 17 17:55:18.625454 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:55:18.627480 systemd-logind[1884]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:55:18.628611 systemd-logind[1884]: Removed session 6. Mar 17 17:55:18.657682 systemd[1]: Started sshd@6-172.31.26.100:22-139.178.89.65:56778.service - OpenSSH per-connection server daemon (139.178.89.65:56778). 
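The repeated accept/close pairs above (sessions 1 through 6 from 139.178.89.65) can be condensed with a short journal-line summarizer; this is an annotation helper of my own, matching the sshd message formats seen in this log:

```python
import re
from collections import Counter

ACCEPT_RE = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
CLOSE_RE = re.compile(r"Connection closed by (\S+) port (\d+)")

def summarize(lines):
    """Count SSH sessions opened and closed per peer address."""
    opened, closed = Counter(), Counter()
    for line in lines:
        if (m := ACCEPT_RE.search(line)):
            opened[m.group(2)] += 1
        elif (m := CLOSE_RE.search(line)):
            closed[m.group(1)] += 1
    return opened, closed
```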
Mar 17 17:55:18.822563 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 56778 ssh2: RSA SHA256:/yGOgSijh5wOwphQZEYloo6+p719VCcrRIrr9gWE3V8 Mar 17 17:55:18.823631 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:18.829660 systemd-logind[1884]: New session 7 of user core. Mar 17 17:55:18.837523 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:55:18.939870 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:55:18.940403 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:20.016337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:20.016597 systemd[1]: kubelet.service: Consumed 937ms CPU time, 234.4M memory peak. Mar 17 17:55:20.022831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:20.066039 systemd[1]: Reload requested from client PID 2254 ('systemctl') (unit session-7.scope)... Mar 17 17:55:20.066057 systemd[1]: Reloading... Mar 17 17:55:20.295301 zram_generator::config[2299]: No configuration found. Mar 17 17:55:20.480258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:55:20.654309 systemd[1]: Reloading finished in 587 ms. Mar 17 17:55:20.739645 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:55:20.745152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:20.745904 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:55:20.746351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:20.746433 systemd[1]: kubelet.service: Consumed 129ms CPU time, 84.5M memory peak. 
Mar 17 17:55:20.754694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:21.006590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:21.024050 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:55:21.110353 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:55:21.111066 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:55:21.111066 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
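The deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file named by `--config`. A sketch of the equivalent `KubeletConfiguration` fields (field names per the kubelet config API; the endpoint and directory values are illustrative, not read from this host):

```python
# Illustrative KubeletConfiguration equivalent of the deprecated flags.
KUBELET_CONFIGURATION = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/var/lib/kubelet/volumeplugins",
    # --pod-infra-container-image has no config-file equivalent; per the
    # warning above, the sandbox image is reported by the CRI runtime instead.
}
```

Serialized as YAML, this dict is what the kubelet would read from the path given to `--config`.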
Mar 17 17:55:21.113517 kubelet[2362]: I0317 17:55:21.111467 2362 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:55:21.897772 kubelet[2362]: I0317 17:55:21.897725 2362 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 17:55:21.897772 kubelet[2362]: I0317 17:55:21.897767 2362 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:55:21.900399 kubelet[2362]: I0317 17:55:21.898491 2362 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 17:55:21.957020 kubelet[2362]: I0317 17:55:21.956981 2362 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:55:21.967776 kubelet[2362]: E0317 17:55:21.967732 2362 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:55:21.967776 kubelet[2362]: I0317 17:55:21.967763 2362 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:55:21.974161 kubelet[2362]: I0317 17:55:21.974131 2362 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:55:21.976610 kubelet[2362]: I0317 17:55:21.976573 2362 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 17:55:21.976914 kubelet[2362]: I0317 17:55:21.976877 2362 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:55:21.977193 kubelet[2362]: I0317 17:55:21.976913 2362 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.26.100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:55:21.977364 kubelet[2362]: I0317 17:55:21.977200 2362 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:55:21.977364 kubelet[2362]: I0317 17:55:21.977251 2362 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 17:55:21.977448 kubelet[2362]: I0317 17:55:21.977427 2362 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:55:21.982752 kubelet[2362]: I0317 17:55:21.982578 2362 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 17:55:21.982752 kubelet[2362]: I0317 17:55:21.982643 2362 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:55:21.982752 kubelet[2362]: I0317 17:55:21.982683 2362 kubelet.go:314] "Adding apiserver pod source"
Mar 17 17:55:21.982752 kubelet[2362]: I0317 17:55:21.982701 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:55:21.983750 kubelet[2362]: E0317 17:55:21.983704 2362 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:21.984084 kubelet[2362]: E0317 17:55:21.983765 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:21.991233 kubelet[2362]: I0317 17:55:21.991055 2362 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:55:21.994594 kubelet[2362]: I0317 17:55:21.994521 2362 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:55:21.995735 kubelet[2362]: W0317 17:55:21.995659 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:55:21.997075 kubelet[2362]: I0317 17:55:21.997050 2362 server.go:1269] "Started kubelet"
Mar 17 17:55:22.002320 kubelet[2362]: I0317 17:55:22.001071 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:55:22.002320 kubelet[2362]: I0317 17:55:22.001662 2362 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:55:22.002320 kubelet[2362]: I0317 17:55:22.001735 2362 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:55:22.003158 kubelet[2362]: I0317 17:55:22.003138 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:55:22.003824 kubelet[2362]: I0317 17:55:22.003807 2362 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 17:55:22.007572 kubelet[2362]: W0317 17:55:22.007113 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.26.100" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 17 17:55:22.007742 kubelet[2362]: E0317 17:55:22.007721 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.26.100\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 17 17:55:22.007971 kubelet[2362]: W0317 17:55:22.007954 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 17 17:55:22.008068 kubelet[2362]: E0317 17:55:22.008052 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 17 17:55:22.013396 kubelet[2362]: I0317 17:55:22.013362 2362 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 17:55:22.015442 kubelet[2362]: E0317 17:55:22.014124 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.015442 kubelet[2362]: I0317 17:55:22.015107 2362 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 17:55:22.015442 kubelet[2362]: I0317 17:55:22.015205 2362 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:55:22.031719 kubelet[2362]: E0317 17:55:22.031064 2362 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:55:22.036616 kubelet[2362]: I0317 17:55:22.036538 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:55:22.038171 kubelet[2362]: I0317 17:55:22.037977 2362 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:55:22.038343 kubelet[2362]: I0317 17:55:22.038223 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:55:22.054112 kubelet[2362]: I0317 17:55:22.053260 2362 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:55:22.059856 kubelet[2362]: E0317 17:55:22.059822 2362 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.26.100\" not found" node="172.31.26.100"
Mar 17 17:55:22.078723 kubelet[2362]: I0317 17:55:22.077537 2362 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:55:22.078723 kubelet[2362]: I0317 17:55:22.077565 2362 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:55:22.078723 kubelet[2362]: I0317 17:55:22.077587 2362 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:55:22.087219 kubelet[2362]: I0317 17:55:22.087179 2362 policy_none.go:49] "None policy: Start"
Mar 17 17:55:22.089131 kubelet[2362]: I0317 17:55:22.088376 2362 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:55:22.089131 kubelet[2362]: I0317 17:55:22.088403 2362 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:55:22.108925 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:55:22.114295 kubelet[2362]: E0317 17:55:22.114220 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.127727 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:55:22.136371 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:55:22.143665 kubelet[2362]: I0317 17:55:22.143636 2362 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:55:22.144958 kubelet[2362]: I0317 17:55:22.144802 2362 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:55:22.145983 kubelet[2362]: I0317 17:55:22.145150 2362 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:55:22.145983 kubelet[2362]: I0317 17:55:22.145706 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:55:22.152928 kubelet[2362]: E0317 17:55:22.152582 2362 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.26.100\" not found"
Mar 17 17:55:22.201692 kubelet[2362]: I0317 17:55:22.201639 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:55:22.206315 kubelet[2362]: I0317 17:55:22.205341 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:55:22.206315 kubelet[2362]: I0317 17:55:22.205397 2362 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:55:22.206315 kubelet[2362]: I0317 17:55:22.205435 2362 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 17:55:22.206315 kubelet[2362]: E0317 17:55:22.205502 2362 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 17 17:55:22.247940 kubelet[2362]: I0317 17:55:22.247632 2362 kubelet_node_status.go:72] "Attempting to register node" node="172.31.26.100"
Mar 17 17:55:22.270042 kubelet[2362]: I0317 17:55:22.270006 2362 kubelet_node_status.go:75] "Successfully registered node" node="172.31.26.100"
Mar 17 17:55:22.270042 kubelet[2362]: E0317 17:55:22.270045 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.26.100\": node \"172.31.26.100\" not found"
Mar 17 17:55:22.319607 kubelet[2362]: E0317 17:55:22.319571 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.420404 kubelet[2362]: E0317 17:55:22.420054 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.520431 kubelet[2362]: E0317 17:55:22.520349 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.570332 sudo[2222]: pam_unix(sudo:session): session closed for user root
Mar 17 17:55:22.593185 sshd[2221]: Connection closed by 139.178.89.65 port 56778
Mar 17 17:55:22.595681 sshd-session[2219]: pam_unix(sshd:session): session closed for user core
Mar 17 17:55:22.601580 systemd[1]: sshd@6-172.31.26.100:22-139.178.89.65:56778.service: Deactivated successfully.
Mar 17 17:55:22.604144 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:55:22.604506 systemd[1]: session-7.scope: Consumed 505ms CPU time, 71.9M memory peak.
Mar 17 17:55:22.606994 systemd-logind[1884]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:55:22.608591 systemd-logind[1884]: Removed session 7.
Mar 17 17:55:22.621081 kubelet[2362]: E0317 17:55:22.621034 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.721657 kubelet[2362]: E0317 17:55:22.721601 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.822292 kubelet[2362]: E0317 17:55:22.822184 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.903476 kubelet[2362]: I0317 17:55:22.903433 2362 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 17 17:55:22.903740 kubelet[2362]: W0317 17:55:22.903692 2362 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 17:55:22.903823 kubelet[2362]: W0317 17:55:22.903756 2362 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 17:55:22.922393 kubelet[2362]: E0317 17:55:22.922355 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:22.984666 kubelet[2362]: E0317 17:55:22.984530 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:23.023212 kubelet[2362]: E0317 17:55:23.023167 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:23.123664 kubelet[2362]: E0317 17:55:23.123619 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:23.224752 kubelet[2362]: E0317 17:55:23.224710 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:23.325700 kubelet[2362]: E0317 17:55:23.325580 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:23.426256 kubelet[2362]: E0317 17:55:23.426203 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:23.527384 kubelet[2362]: E0317 17:55:23.527337 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.26.100\" not found"
Mar 17 17:55:23.629051 kubelet[2362]: I0317 17:55:23.628937 2362 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 17 17:55:23.629603 containerd[1904]: time="2025-03-17T17:55:23.629453161Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:55:23.630761 kubelet[2362]: I0317 17:55:23.629711 2362 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 17 17:55:23.984915 kubelet[2362]: I0317 17:55:23.984869 2362 apiserver.go:52] "Watching apiserver"
Mar 17 17:55:23.985103 kubelet[2362]: E0317 17:55:23.984863 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:23.999815 kubelet[2362]: E0317 17:55:23.999611 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:24.019778 kubelet[2362]: I0317 17:55:24.018692 2362 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 17:55:24.028981 kubelet[2362]: I0317 17:55:24.028951 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-flexvol-driver-host\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.030672 kubelet[2362]: I0317 17:55:24.030593 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d1a937b-4e87-4260-a1b6-2ddb288ceef3-registration-dir\") pod \"csi-node-driver-zq588\" (UID: \"1d1a937b-4e87-4260-a1b6-2ddb288ceef3\") " pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:24.030886 kubelet[2362]: I0317 17:55:24.030867 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5th\" (UniqueName: \"kubernetes.io/projected/f61bc243-ef77-4343-be86-3ac2bc7d9520-kube-api-access-7x5th\") pod \"kube-proxy-czhrf\" (UID: \"f61bc243-ef77-4343-be86-3ac2bc7d9520\") " pod="kube-system/kube-proxy-czhrf"
Mar 17 17:55:24.031292 kubelet[2362]: I0317 17:55:24.030983 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-lib-modules\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.031292 kubelet[2362]: I0317 17:55:24.031013 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73374b66-2ee4-47c9-8aae-384bef6da462-node-certs\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.031292 kubelet[2362]: I0317 17:55:24.031040 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-bin-dir\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.031292 kubelet[2362]: I0317 17:55:24.031068 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-net-dir\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.031292 kubelet[2362]: I0317 17:55:24.031093 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lg85\" (UniqueName: \"kubernetes.io/projected/73374b66-2ee4-47c9-8aae-384bef6da462-kube-api-access-8lg85\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.032191 kubelet[2362]: I0317 17:55:24.031118 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d1a937b-4e87-4260-a1b6-2ddb288ceef3-kubelet-dir\") pod \"csi-node-driver-zq588\" (UID: \"1d1a937b-4e87-4260-a1b6-2ddb288ceef3\") " pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:24.032191 kubelet[2362]: I0317 17:55:24.031142 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-run-calico\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.032191 kubelet[2362]: I0317 17:55:24.031178 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-log-dir\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.032191 kubelet[2362]: I0317 17:55:24.031202 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d1a937b-4e87-4260-a1b6-2ddb288ceef3-socket-dir\") pod \"csi-node-driver-zq588\" (UID: \"1d1a937b-4e87-4260-a1b6-2ddb288ceef3\") " pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:24.032191 kubelet[2362]: I0317 17:55:24.031234 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rsd\" (UniqueName: \"kubernetes.io/projected/1d1a937b-4e87-4260-a1b6-2ddb288ceef3-kube-api-access-v8rsd\") pod \"csi-node-driver-zq588\" (UID: \"1d1a937b-4e87-4260-a1b6-2ddb288ceef3\") " pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:24.032425 kubelet[2362]: I0317 17:55:24.031258 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f61bc243-ef77-4343-be86-3ac2bc7d9520-xtables-lock\") pod \"kube-proxy-czhrf\" (UID: \"f61bc243-ef77-4343-be86-3ac2bc7d9520\") " pod="kube-system/kube-proxy-czhrf"
Mar 17 17:55:24.032235 systemd[1]: Created slice kubepods-besteffort-podf61bc243_ef77_4343_be86_3ac2bc7d9520.slice - libcontainer container kubepods-besteffort-podf61bc243_ef77_4343_be86_3ac2bc7d9520.slice.
Mar 17 17:55:24.036907 kubelet[2362]: I0317 17:55:24.036416 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f61bc243-ef77-4343-be86-3ac2bc7d9520-lib-modules\") pod \"kube-proxy-czhrf\" (UID: \"f61bc243-ef77-4343-be86-3ac2bc7d9520\") " pod="kube-system/kube-proxy-czhrf"
Mar 17 17:55:24.036907 kubelet[2362]: I0317 17:55:24.036476 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-xtables-lock\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.036907 kubelet[2362]: I0317 17:55:24.036500 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-policysync\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.036907 kubelet[2362]: I0317 17:55:24.036544 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73374b66-2ee4-47c9-8aae-384bef6da462-tigera-ca-bundle\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.036907 kubelet[2362]: I0317 17:55:24.036569 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-lib-calico\") pod \"calico-node-tz4vw\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " pod="calico-system/calico-node-tz4vw"
Mar 17 17:55:24.037108 kubelet[2362]: I0317 17:55:24.036596 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1d1a937b-4e87-4260-a1b6-2ddb288ceef3-varrun\") pod \"csi-node-driver-zq588\" (UID: \"1d1a937b-4e87-4260-a1b6-2ddb288ceef3\") " pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:24.037108 kubelet[2362]: I0317 17:55:24.036637 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f61bc243-ef77-4343-be86-3ac2bc7d9520-kube-proxy\") pod \"kube-proxy-czhrf\" (UID: \"f61bc243-ef77-4343-be86-3ac2bc7d9520\") " pod="kube-system/kube-proxy-czhrf"
Mar 17 17:55:24.048313 systemd[1]: Created slice kubepods-besteffort-pod73374b66_2ee4_47c9_8aae_384bef6da462.slice - libcontainer container kubepods-besteffort-pod73374b66_2ee4_47c9_8aae_384bef6da462.slice.
Mar 17 17:55:24.142037 kubelet[2362]: E0317 17:55:24.141785 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.142037 kubelet[2362]: W0317 17:55:24.141811 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.142037 kubelet[2362]: E0317 17:55:24.141857 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.145292 kubelet[2362]: E0317 17:55:24.145248 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.145365 kubelet[2362]: W0317 17:55:24.145293 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.145365 kubelet[2362]: E0317 17:55:24.145317 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.224294 kubelet[2362]: E0317 17:55:24.221539 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.224294 kubelet[2362]: W0317 17:55:24.221583 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.224294 kubelet[2362]: E0317 17:55:24.221613 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.239143 kubelet[2362]: E0317 17:55:24.238709 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.239143 kubelet[2362]: W0317 17:55:24.238743 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.239143 kubelet[2362]: E0317 17:55:24.238819 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.239143 kubelet[2362]: E0317 17:55:24.239139 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.239855 kubelet[2362]: W0317 17:55:24.239152 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.239855 kubelet[2362]: E0317 17:55:24.239170 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.241639 kubelet[2362]: E0317 17:55:24.239948 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.241639 kubelet[2362]: W0317 17:55:24.239967 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.241639 kubelet[2362]: E0317 17:55:24.239996 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.241639 kubelet[2362]: E0317 17:55:24.240233 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.241639 kubelet[2362]: W0317 17:55:24.240243 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.241639 kubelet[2362]: E0317 17:55:24.240255 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.241639 kubelet[2362]: E0317 17:55:24.241214 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.241639 kubelet[2362]: W0317 17:55:24.241227 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.241639 kubelet[2362]: E0317 17:55:24.241242 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.245068 kubelet[2362]: E0317 17:55:24.244773 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.245068 kubelet[2362]: W0317 17:55:24.244793 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.245068 kubelet[2362]: E0317 17:55:24.244812 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.246405 kubelet[2362]: E0317 17:55:24.246383 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.246405 kubelet[2362]: W0317 17:55:24.246404 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.246524 kubelet[2362]: E0317 17:55:24.246421 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.249390 kubelet[2362]: E0317 17:55:24.249362 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.249390 kubelet[2362]: W0317 17:55:24.249382 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.249572 kubelet[2362]: E0317 17:55:24.249402 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.249757 kubelet[2362]: E0317 17:55:24.249744 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:24.249757 kubelet[2362]: W0317 17:55:24.249755 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:24.249841 kubelet[2362]: E0317 17:55:24.249772 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:24.343538 containerd[1904]: time="2025-03-17T17:55:24.343491678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czhrf,Uid:f61bc243-ef77-4343-be86-3ac2bc7d9520,Namespace:kube-system,Attempt:0,}"
Mar 17 17:55:24.355383 containerd[1904]: time="2025-03-17T17:55:24.355341823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tz4vw,Uid:73374b66-2ee4-47c9-8aae-384bef6da462,Namespace:calico-system,Attempt:0,}"
Mar 17 17:55:24.985822 kubelet[2362]: E0317 17:55:24.985747 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:24.994100 containerd[1904]: time="2025-03-17T17:55:24.994043594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:55:24.997819 containerd[1904]: time="2025-03-17T17:55:24.997672121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 17 17:55:24.999943 containerd[1904]: time="2025-03-17T17:55:24.999891603Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:55:25.002223 containerd[1904]: time="2025-03-17T17:55:25.002172425Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:55:25.003701 containerd[1904]: time="2025-03-17T17:55:25.003639177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:55:25.009287 containerd[1904]: time="2025-03-17T17:55:25.007903294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:55:25.009287 containerd[1904]: time="2025-03-17T17:55:25.008719140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.101224ms" Mar 17 17:55:25.012509 containerd[1904]: time="2025-03-17T17:55:25.012459442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.007349ms" Mar 17 17:55:25.157807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267756907.mount: Deactivated successfully. 
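The repeated kubelet error triplet above has a single root cause: the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so each probe's `init` call produces empty output, and unmarshalling "" as JSON fails (Go's "unexpected end of JSON input"). A minimal Python sketch of that failure mode (illustrative only, not kubelet's actual Go code; the driver path is taken from the log):

```python
import json
import subprocess

def call_flexvolume_driver(driver_path, args):
    """Run a FlexVolume driver and parse its JSON reply.

    Loosely mirrors kubelet's driver-call behavior: a missing executable
    yields empty output, and parsing "" as JSON fails.
    """
    try:
        out = subprocess.run([driver_path, *args],
                             capture_output=True, text=True).stdout
    except (FileNotFoundError, NotADirectoryError):
        out = ""  # executable not found -> empty driver output, as in the log
    try:
        return json.loads(out)
    except json.JSONDecodeError:
        # Go's encoding/json reports this case as "unexpected end of JSON input"
        return {"status": "Failure",
                "message": "unexpected end of JSON input"}

print(call_flexvolume_driver(
    "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
    ["init"]))
```

A working driver would instead print a JSON status object (e.g. `{"status": "Success", "capabilities": {"attach": false}}`) on `init`; the fix on a real node is to install the driver binary at that path or remove the stale `nodeagent~uds` plugin directory.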
Mar 17 17:55:25.206528 kubelet[2362]: E0317 17:55:25.205967 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:25.433031 containerd[1904]: time="2025-03-17T17:55:25.432350423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:55:25.433031 containerd[1904]: time="2025-03-17T17:55:25.432425317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:55:25.433031 containerd[1904]: time="2025-03-17T17:55:25.432450017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:55:25.433031 containerd[1904]: time="2025-03-17T17:55:25.432565754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:55:25.434299 containerd[1904]: time="2025-03-17T17:55:25.433515696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:55:25.434299 containerd[1904]: time="2025-03-17T17:55:25.433582167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:55:25.434299 containerd[1904]: time="2025-03-17T17:55:25.433606182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:55:25.434299 containerd[1904]: time="2025-03-17T17:55:25.434070551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:55:25.641815 systemd[1]: run-containerd-runc-k8s.io-2b381df468cd5430e826d59c01ccf0bedbee312a6b14e2b5d822e7675bf572d0-runc.xdsKEy.mount: Deactivated successfully.
Mar 17 17:55:25.652708 systemd[1]: Started cri-containerd-2b381df468cd5430e826d59c01ccf0bedbee312a6b14e2b5d822e7675bf572d0.scope - libcontainer container 2b381df468cd5430e826d59c01ccf0bedbee312a6b14e2b5d822e7675bf572d0.
Mar 17 17:55:25.656258 systemd[1]: Started cri-containerd-ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc.scope - libcontainer container ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc.
Mar 17 17:55:25.705755 containerd[1904]: time="2025-03-17T17:55:25.705670630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czhrf,Uid:f61bc243-ef77-4343-be86-3ac2bc7d9520,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b381df468cd5430e826d59c01ccf0bedbee312a6b14e2b5d822e7675bf572d0\""
Mar 17 17:55:25.709176 containerd[1904]: time="2025-03-17T17:55:25.709139010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tz4vw,Uid:73374b66-2ee4-47c9-8aae-384bef6da462,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\""
Mar 17 17:55:25.712681 containerd[1904]: time="2025-03-17T17:55:25.712646042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 17 17:55:25.986326 kubelet[2362]: E0317 17:55:25.986193 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:26.987231 kubelet[2362]: E0317 17:55:26.987161 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:27.009778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514518802.mount: Deactivated successfully.
Mar 17 17:55:27.206042 kubelet[2362]: E0317 17:55:27.205990 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:27.835577 containerd[1904]: time="2025-03-17T17:55:27.835527790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:27.836794 containerd[1904]: time="2025-03-17T17:55:27.836655231Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630"
Mar 17 17:55:27.838949 containerd[1904]: time="2025-03-17T17:55:27.837796549Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:27.840083 containerd[1904]: time="2025-03-17T17:55:27.839900297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:27.840617 containerd[1904]: time="2025-03-17T17:55:27.840582017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 2.12789638s"
Mar 17 17:55:27.840705 containerd[1904]: time="2025-03-17T17:55:27.840624381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 17 17:55:27.842754 containerd[1904]: time="2025-03-17T17:55:27.842465292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 17 17:55:27.843389 containerd[1904]: time="2025-03-17T17:55:27.843358508Z" level=info msg="CreateContainer within sandbox \"2b381df468cd5430e826d59c01ccf0bedbee312a6b14e2b5d822e7675bf572d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:55:27.871050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182453629.mount: Deactivated successfully.
Mar 17 17:55:27.889624 containerd[1904]: time="2025-03-17T17:55:27.889566685Z" level=info msg="CreateContainer within sandbox \"2b381df468cd5430e826d59c01ccf0bedbee312a6b14e2b5d822e7675bf572d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a10006b2093321987a3653e595617973ac9d40c4b912db67bc97fec8ba400547\""
Mar 17 17:55:27.890695 containerd[1904]: time="2025-03-17T17:55:27.890652219Z" level=info msg="StartContainer for \"a10006b2093321987a3653e595617973ac9d40c4b912db67bc97fec8ba400547\""
Mar 17 17:55:27.927625 systemd[1]: Started cri-containerd-a10006b2093321987a3653e595617973ac9d40c4b912db67bc97fec8ba400547.scope - libcontainer container a10006b2093321987a3653e595617973ac9d40c4b912db67bc97fec8ba400547.
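The containerd "Pulled image ... size ... in <duration>" entries above embed the image name, byte size, and pull latency. A small hypothetical Python helper (not a containerd API) for scraping those fields from such lines, assuming the journal's escaped quotes have already been unescaped:

```python
import re

def parse_pull(msg):
    """Extract image name, size in bytes, and pull duration in seconds
    from a containerd 'Pulled image' log message (unescaped quotes)."""
    m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(ms|s)\b', msg)
    if not m:
        return None
    image, size, value, unit = m.groups()
    seconds = float(value) / 1000.0 if unit == "ms" else float(value)
    return {"image": image, "bytes": int(size), "seconds": seconds}

# Sample built from the pause-image entry earlier in this log
sample = ('Pulled image "registry.k8s.io/pause:3.8" with image id '
          '"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517", '
          'size "311286" in 665.101224ms')
print(parse_pull(sample))
```

Run over a whole journal, this kind of scrape makes it easy to spot slow pulls, e.g. the 2.12789638s kube-proxy pull versus the sub-second pause-image pulls above.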
Mar 17 17:55:27.970550 containerd[1904]: time="2025-03-17T17:55:27.970356527Z" level=info msg="StartContainer for \"a10006b2093321987a3653e595617973ac9d40c4b912db67bc97fec8ba400547\" returns successfully"
Mar 17 17:55:27.989509 kubelet[2362]: E0317 17:55:27.989452 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:28.351341 kubelet[2362]: E0317 17:55:28.350697 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.351341 kubelet[2362]: W0317 17:55:28.350747 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.351341 kubelet[2362]: E0317 17:55:28.350777 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.351341 kubelet[2362]: E0317 17:55:28.351110 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.351341 kubelet[2362]: W0317 17:55:28.351123 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.351341 kubelet[2362]: E0317 17:55:28.351140 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.351719 kubelet[2362]: E0317 17:55:28.351500 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.351719 kubelet[2362]: W0317 17:55:28.351527 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.351719 kubelet[2362]: E0317 17:55:28.351543 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.351850 kubelet[2362]: E0317 17:55:28.351805 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.351850 kubelet[2362]: W0317 17:55:28.351816 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.351850 kubelet[2362]: E0317 17:55:28.351844 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.352885 kubelet[2362]: E0317 17:55:28.352113 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.352885 kubelet[2362]: W0317 17:55:28.352130 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.352885 kubelet[2362]: E0317 17:55:28.352173 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.352885 kubelet[2362]: E0317 17:55:28.352490 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.352885 kubelet[2362]: W0317 17:55:28.352504 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.352885 kubelet[2362]: E0317 17:55:28.352538 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.352885 kubelet[2362]: E0317 17:55:28.352843 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.352885 kubelet[2362]: W0317 17:55:28.352888 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.353317 kubelet[2362]: E0317 17:55:28.352902 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.353317 kubelet[2362]: E0317 17:55:28.353209 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.353317 kubelet[2362]: W0317 17:55:28.353219 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.353317 kubelet[2362]: E0317 17:55:28.353232 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.354298 kubelet[2362]: E0317 17:55:28.353545 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.354298 kubelet[2362]: W0317 17:55:28.353558 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.354298 kubelet[2362]: E0317 17:55:28.353571 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.354298 kubelet[2362]: E0317 17:55:28.353897 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.354298 kubelet[2362]: W0317 17:55:28.353908 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.354298 kubelet[2362]: E0317 17:55:28.353922 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.354298 kubelet[2362]: E0317 17:55:28.354179 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.354298 kubelet[2362]: W0317 17:55:28.354190 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.354298 kubelet[2362]: E0317 17:55:28.354212 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.354722 kubelet[2362]: E0317 17:55:28.354482 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.354722 kubelet[2362]: W0317 17:55:28.354492 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.354722 kubelet[2362]: E0317 17:55:28.354515 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.354845 kubelet[2362]: E0317 17:55:28.354761 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.354845 kubelet[2362]: W0317 17:55:28.354772 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.354845 kubelet[2362]: E0317 17:55:28.354784 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.355876 kubelet[2362]: E0317 17:55:28.355047 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.355876 kubelet[2362]: W0317 17:55:28.355059 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.355876 kubelet[2362]: E0317 17:55:28.355072 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.355876 kubelet[2362]: E0317 17:55:28.355386 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.355876 kubelet[2362]: W0317 17:55:28.355397 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.355876 kubelet[2362]: E0317 17:55:28.355410 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.355876 kubelet[2362]: E0317 17:55:28.355661 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.355876 kubelet[2362]: W0317 17:55:28.355670 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.355876 kubelet[2362]: E0317 17:55:28.355683 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.356299 kubelet[2362]: E0317 17:55:28.355941 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.356299 kubelet[2362]: W0317 17:55:28.355951 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.356299 kubelet[2362]: E0317 17:55:28.355966 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.356299 kubelet[2362]: E0317 17:55:28.356200 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.356299 kubelet[2362]: W0317 17:55:28.356209 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.356299 kubelet[2362]: E0317 17:55:28.356222 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.356546 kubelet[2362]: E0317 17:55:28.356489 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.356546 kubelet[2362]: W0317 17:55:28.356508 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.356546 kubelet[2362]: E0317 17:55:28.356521 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.357401 kubelet[2362]: E0317 17:55:28.356839 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.357401 kubelet[2362]: W0317 17:55:28.356862 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.357401 kubelet[2362]: E0317 17:55:28.356877 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.381676 kubelet[2362]: E0317 17:55:28.381646 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.381952 kubelet[2362]: W0317 17:55:28.381730 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.381952 kubelet[2362]: E0317 17:55:28.381756 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.386592 kubelet[2362]: E0317 17:55:28.386469 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.386592 kubelet[2362]: W0317 17:55:28.386519 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.386592 kubelet[2362]: E0317 17:55:28.386552 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.388795 kubelet[2362]: E0317 17:55:28.388766 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.388795 kubelet[2362]: W0317 17:55:28.388788 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.389071 kubelet[2362]: E0317 17:55:28.388817 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.389131 kubelet[2362]: E0317 17:55:28.389088 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.389131 kubelet[2362]: W0317 17:55:28.389118 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.389236 kubelet[2362]: E0317 17:55:28.389196 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.390487 kubelet[2362]: E0317 17:55:28.389634 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.390487 kubelet[2362]: W0317 17:55:28.389649 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.390487 kubelet[2362]: E0317 17:55:28.389696 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.391424 kubelet[2362]: E0317 17:55:28.390773 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.391424 kubelet[2362]: W0317 17:55:28.390788 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.391424 kubelet[2362]: E0317 17:55:28.390814 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.391424 kubelet[2362]: E0317 17:55:28.391040 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.391424 kubelet[2362]: W0317 17:55:28.391049 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.391693 kubelet[2362]: E0317 17:55:28.391427 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.391693 kubelet[2362]: E0317 17:55:28.391598 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.391693 kubelet[2362]: W0317 17:55:28.391611 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.391693 kubelet[2362]: E0317 17:55:28.391628 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.394318 kubelet[2362]: E0317 17:55:28.393050 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.394318 kubelet[2362]: W0317 17:55:28.393065 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.394318 kubelet[2362]: E0317 17:55:28.393092 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.394318 kubelet[2362]: E0317 17:55:28.393731 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.394318 kubelet[2362]: W0317 17:55:28.393742 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.394318 kubelet[2362]: E0317 17:55:28.393759 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.395490 kubelet[2362]: E0317 17:55:28.395055 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.395490 kubelet[2362]: W0317 17:55:28.395068 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.395490 kubelet[2362]: E0317 17:55:28.395239 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.396006 kubelet[2362]: E0317 17:55:28.395689 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:28.396006 kubelet[2362]: W0317 17:55:28.395705 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:28.396006 kubelet[2362]: E0317 17:55:28.395719 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:28.990285 kubelet[2362]: E0317 17:55:28.990207 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:29.205731 kubelet[2362]: E0317 17:55:29.205668 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:29.263094 kubelet[2362]: E0317 17:55:29.262977 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:29.263094 kubelet[2362]: W0317 17:55:29.263002 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:29.263094 kubelet[2362]: E0317 17:55:29.263025 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:55:29.263354 kubelet[2362]: E0317 17:55:29.263328 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:55:29.263354 kubelet[2362]: W0317 17:55:29.263341 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:55:29.263449 kubelet[2362]: E0317 17:55:29.263371 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 17 17:55:29.263959 kubelet[2362]: E0317 17:55:29.263630 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.263959 kubelet[2362]: W0317 17:55:29.263643 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.263959 kubelet[2362]: E0317 17:55:29.263656 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.263959 kubelet[2362]: E0317 17:55:29.263897 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.263959 kubelet[2362]: W0317 17:55:29.263923 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.263959 kubelet[2362]: E0317 17:55:29.263935 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.264288 kubelet[2362]: E0317 17:55:29.264195 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.264288 kubelet[2362]: W0317 17:55:29.264206 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.264288 kubelet[2362]: E0317 17:55:29.264218 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.264497 kubelet[2362]: E0317 17:55:29.264479 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.264497 kubelet[2362]: W0317 17:55:29.264493 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.264607 kubelet[2362]: E0317 17:55:29.264506 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.264881 kubelet[2362]: E0317 17:55:29.264749 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.264881 kubelet[2362]: W0317 17:55:29.264776 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.264881 kubelet[2362]: E0317 17:55:29.264789 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.265093 kubelet[2362]: E0317 17:55:29.265072 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.265093 kubelet[2362]: W0317 17:55:29.265087 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.265237 kubelet[2362]: E0317 17:55:29.265133 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.265390 kubelet[2362]: E0317 17:55:29.265369 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.265470 kubelet[2362]: W0317 17:55:29.265389 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.265470 kubelet[2362]: E0317 17:55:29.265403 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.265623 kubelet[2362]: E0317 17:55:29.265596 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.265623 kubelet[2362]: W0317 17:55:29.265609 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.265623 kubelet[2362]: E0317 17:55:29.265622 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.265854 kubelet[2362]: E0317 17:55:29.265811 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.265854 kubelet[2362]: W0317 17:55:29.265820 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.265854 kubelet[2362]: E0317 17:55:29.265832 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.266049 kubelet[2362]: E0317 17:55:29.266027 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.266049 kubelet[2362]: W0317 17:55:29.266037 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.266200 kubelet[2362]: E0317 17:55:29.266048 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.266289 kubelet[2362]: E0317 17:55:29.266246 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.266289 kubelet[2362]: W0317 17:55:29.266259 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.266289 kubelet[2362]: E0317 17:55:29.266285 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.266514 kubelet[2362]: E0317 17:55:29.266476 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.266514 kubelet[2362]: W0317 17:55:29.266490 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.266514 kubelet[2362]: E0317 17:55:29.266501 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.266730 kubelet[2362]: E0317 17:55:29.266685 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.266730 kubelet[2362]: W0317 17:55:29.266695 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.266730 kubelet[2362]: E0317 17:55:29.266706 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.266932 kubelet[2362]: E0317 17:55:29.266900 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.266932 kubelet[2362]: W0317 17:55:29.266910 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.266932 kubelet[2362]: E0317 17:55:29.266921 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.267153 kubelet[2362]: E0317 17:55:29.267109 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.267153 kubelet[2362]: W0317 17:55:29.267122 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.267153 kubelet[2362]: E0317 17:55:29.267134 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.267370 kubelet[2362]: E0317 17:55:29.267342 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.267370 kubelet[2362]: W0317 17:55:29.267357 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.267515 kubelet[2362]: E0317 17:55:29.267368 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.267574 kubelet[2362]: E0317 17:55:29.267563 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.267616 kubelet[2362]: W0317 17:55:29.267576 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.267616 kubelet[2362]: E0317 17:55:29.267588 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.267815 kubelet[2362]: E0317 17:55:29.267787 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.267815 kubelet[2362]: W0317 17:55:29.267813 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.267907 kubelet[2362]: E0317 17:55:29.267829 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.295182 kubelet[2362]: E0317 17:55:29.295149 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.295182 kubelet[2362]: W0317 17:55:29.295173 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.295460 kubelet[2362]: E0317 17:55:29.295196 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.295599 kubelet[2362]: E0317 17:55:29.295579 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.295599 kubelet[2362]: W0317 17:55:29.295598 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.295716 kubelet[2362]: E0317 17:55:29.295630 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.295981 kubelet[2362]: E0317 17:55:29.295959 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.295981 kubelet[2362]: W0317 17:55:29.295975 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.296224 kubelet[2362]: E0317 17:55:29.295997 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.296406 kubelet[2362]: E0317 17:55:29.296388 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.296406 kubelet[2362]: W0317 17:55:29.296403 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.296525 kubelet[2362]: E0317 17:55:29.296423 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.296849 kubelet[2362]: E0317 17:55:29.296661 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.296849 kubelet[2362]: W0317 17:55:29.296674 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.296849 kubelet[2362]: E0317 17:55:29.296714 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.297029 kubelet[2362]: E0317 17:55:29.297017 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.297304 kubelet[2362]: W0317 17:55:29.297084 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.297304 kubelet[2362]: E0317 17:55:29.297107 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.297569 kubelet[2362]: E0317 17:55:29.297550 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.297569 kubelet[2362]: W0317 17:55:29.297566 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.297732 kubelet[2362]: E0317 17:55:29.297693 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.297789 kubelet[2362]: E0317 17:55:29.297781 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.297838 kubelet[2362]: W0317 17:55:29.297791 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.297838 kubelet[2362]: E0317 17:55:29.297803 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.298018 kubelet[2362]: E0317 17:55:29.298001 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.298018 kubelet[2362]: W0317 17:55:29.298014 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.298123 kubelet[2362]: E0317 17:55:29.298027 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.298254 kubelet[2362]: E0317 17:55:29.298237 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.298254 kubelet[2362]: W0317 17:55:29.298250 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.298379 kubelet[2362]: E0317 17:55:29.298263 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:55:29.298545 kubelet[2362]: E0317 17:55:29.298518 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.298545 kubelet[2362]: W0317 17:55:29.298531 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.298640 kubelet[2362]: E0317 17:55:29.298545 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.299025 kubelet[2362]: E0317 17:55:29.299007 2362 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:55:29.299086 kubelet[2362]: W0317 17:55:29.299032 2362 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:55:29.299086 kubelet[2362]: E0317 17:55:29.299046 2362 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:55:29.725557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831117965.mount: Deactivated successfully. 
Mar 17 17:55:29.876323 containerd[1904]: time="2025-03-17T17:55:29.876254834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:29.879009 containerd[1904]: time="2025-03-17T17:55:29.878851381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253"
Mar 17 17:55:29.882036 containerd[1904]: time="2025-03-17T17:55:29.881659937Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:29.885511 containerd[1904]: time="2025-03-17T17:55:29.885442892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:29.886693 containerd[1904]: time="2025-03-17T17:55:29.886645979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.044071098s"
Mar 17 17:55:29.886693 containerd[1904]: time="2025-03-17T17:55:29.886685517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\""
Mar 17 17:55:29.896826 containerd[1904]: time="2025-03-17T17:55:29.896609304Z" level=info msg="CreateContainer within sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 17 17:55:29.942786 containerd[1904]: time="2025-03-17T17:55:29.942726977Z" level=info msg="CreateContainer within sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\""
Mar 17 17:55:29.943519 containerd[1904]: time="2025-03-17T17:55:29.943478147Z" level=info msg="StartContainer for \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\""
Mar 17 17:55:29.991927 kubelet[2362]: E0317 17:55:29.991795 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:30.041790 systemd[1]: Started cri-containerd-1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53.scope - libcontainer container 1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53.
Mar 17 17:55:30.083567 containerd[1904]: time="2025-03-17T17:55:30.083520299Z" level=info msg="StartContainer for \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\" returns successfully"
Mar 17 17:55:30.095312 systemd[1]: cri-containerd-1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53.scope: Deactivated successfully.
Mar 17 17:55:30.221651 containerd[1904]: time="2025-03-17T17:55:30.220313041Z" level=info msg="shim disconnected" id=1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53 namespace=k8s.io
Mar 17 17:55:30.221651 containerd[1904]: time="2025-03-17T17:55:30.221649116Z" level=warning msg="cleaning up after shim disconnected" id=1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53 namespace=k8s.io
Mar 17 17:55:30.222071 containerd[1904]: time="2025-03-17T17:55:30.221665812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:55:30.265057 containerd[1904]: time="2025-03-17T17:55:30.264450027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 17 17:55:30.313017 kubelet[2362]: I0317 17:55:30.312457 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-czhrf" podStartSLOduration=6.182548256 podStartE2EDuration="8.312432436s" podCreationTimestamp="2025-03-17 17:55:22 +0000 UTC" firstStartedPulling="2025-03-17 17:55:25.711866418 +0000 UTC m=+4.679135896" lastFinishedPulling="2025-03-17 17:55:27.841750584 +0000 UTC m=+6.809020076" observedRunningTime="2025-03-17 17:55:28.306905316 +0000 UTC m=+7.274174829" watchObservedRunningTime="2025-03-17 17:55:30.312432436 +0000 UTC m=+9.279701937"
Mar 17 17:55:30.637535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53-rootfs.mount: Deactivated successfully.
Mar 17 17:55:30.995295 kubelet[2362]: E0317 17:55:30.992332 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:31.206806 kubelet[2362]: E0317 17:55:31.206753 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:31.992863 kubelet[2362]: E0317 17:55:31.992809 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:32.993815 kubelet[2362]: E0317 17:55:32.993324 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:33.206371 kubelet[2362]: E0317 17:55:33.206316 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:33.994036 kubelet[2362]: E0317 17:55:33.994001 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:34.994769 kubelet[2362]: E0317 17:55:34.994722 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:35.206773 kubelet[2362]: E0317 17:55:35.206168 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:35.350386 containerd[1904]: time="2025-03-17T17:55:35.350227392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:35.352704 containerd[1904]: time="2025-03-17T17:55:35.352518523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477"
Mar 17 17:55:35.354915 containerd[1904]: time="2025-03-17T17:55:35.354606522Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:35.362522 containerd[1904]: time="2025-03-17T17:55:35.362473821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:55:35.363607 containerd[1904]: time="2025-03-17T17:55:35.363568737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 5.099073429s"
Mar 17 17:55:35.363872 containerd[1904]: time="2025-03-17T17:55:35.363760845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\""
Mar 17 17:55:35.371193 containerd[1904]: time="2025-03-17T17:55:35.370261526Z" level=info msg="CreateContainer within sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 17 17:55:35.409745 containerd[1904]: time="2025-03-17T17:55:35.409688042Z" level=info msg="CreateContainer within sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\""
Mar 17 17:55:35.410488 containerd[1904]: time="2025-03-17T17:55:35.410456928Z" level=info msg="StartContainer for \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\""
Mar 17 17:55:35.448420 systemd[1]: run-containerd-runc-k8s.io-5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79-runc.5rF0u1.mount: Deactivated successfully.
Mar 17 17:55:35.455492 systemd[1]: Started cri-containerd-5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79.scope - libcontainer container 5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79.
Mar 17 17:55:35.502670 containerd[1904]: time="2025-03-17T17:55:35.501952033Z" level=info msg="StartContainer for \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\" returns successfully"
Mar 17 17:55:35.995113 kubelet[2362]: E0317 17:55:35.995054 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:36.056876 containerd[1904]: time="2025-03-17T17:55:36.056818256Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:55:36.060160 systemd[1]: cri-containerd-5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79.scope: Deactivated successfully.
Mar 17 17:55:36.060513 systemd[1]: cri-containerd-5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79.scope: Consumed 510ms CPU time, 172.3M memory peak, 154M written to disk.
Mar 17 17:55:36.070942 kubelet[2362]: I0317 17:55:36.070220 2362 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 17:55:36.086673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79-rootfs.mount: Deactivated successfully.
Mar 17 17:55:36.807231 containerd[1904]: time="2025-03-17T17:55:36.807147083Z" level=info msg="shim disconnected" id=5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79 namespace=k8s.io
Mar 17 17:55:36.807231 containerd[1904]: time="2025-03-17T17:55:36.807223229Z" level=warning msg="cleaning up after shim disconnected" id=5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79 namespace=k8s.io
Mar 17 17:55:36.807231 containerd[1904]: time="2025-03-17T17:55:36.807235157Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:55:36.995921 kubelet[2362]: E0317 17:55:36.995770 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:37.216155 systemd[1]: Created slice kubepods-besteffort-pod1d1a937b_4e87_4260_a1b6_2ddb288ceef3.slice - libcontainer container kubepods-besteffort-pod1d1a937b_4e87_4260_a1b6_2ddb288ceef3.slice.
Mar 17 17:55:37.219292 containerd[1904]: time="2025-03-17T17:55:37.219222196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:0,}"
Mar 17 17:55:37.296131 containerd[1904]: time="2025-03-17T17:55:37.295567670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 17 17:55:37.313637 containerd[1904]: time="2025-03-17T17:55:37.313503798Z" level=error msg="Failed to destroy network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:37.317431 containerd[1904]: time="2025-03-17T17:55:37.314006168Z" level=error msg="encountered an error cleaning up failed sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:37.317431 containerd[1904]: time="2025-03-17T17:55:37.314085130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:37.316998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c-shm.mount: Deactivated successfully.
Mar 17 17:55:37.317632 kubelet[2362]: E0317 17:55:37.314323 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:37.317632 kubelet[2362]: E0317 17:55:37.314402 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:37.317632 kubelet[2362]: E0317 17:55:37.314431 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:37.317730 kubelet[2362]: E0317 17:55:37.314506 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:37.428479 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 17 17:55:37.996206 kubelet[2362]: E0317 17:55:37.996150 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:38.297521 kubelet[2362]: I0317 17:55:38.296958 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c"
Mar 17 17:55:38.297904 containerd[1904]: time="2025-03-17T17:55:38.297870714Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\""
Mar 17 17:55:38.301311 containerd[1904]: time="2025-03-17T17:55:38.298132852Z" level=info msg="Ensure that sandbox d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c in task-service has been cleanup successfully"
Mar 17 17:55:38.301553 containerd[1904]: time="2025-03-17T17:55:38.301521638Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully"
Mar 17 17:55:38.301685 containerd[1904]: time="2025-03-17T17:55:38.301553028Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully"
Mar 17 17:55:38.302287 containerd[1904]: time="2025-03-17T17:55:38.302237750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:1,}"
Mar 17 17:55:38.303055 systemd[1]: run-netns-cni\x2d67e6a552\x2d0e60\x2dcd3d\x2d5796\x2dfa105a770598.mount: Deactivated successfully.
Mar 17 17:55:38.397387 containerd[1904]: time="2025-03-17T17:55:38.397341139Z" level=error msg="Failed to destroy network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:38.400067 containerd[1904]: time="2025-03-17T17:55:38.399854772Z" level=error msg="encountered an error cleaning up failed sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:38.400067 containerd[1904]: time="2025-03-17T17:55:38.399954122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:38.401870 kubelet[2362]: E0317 17:55:38.400478 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:38.401870 kubelet[2362]: E0317 17:55:38.400543 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:38.401870 kubelet[2362]: E0317 17:55:38.400565 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:38.401626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9-shm.mount: Deactivated successfully.
Mar 17 17:55:38.402169 kubelet[2362]: E0317 17:55:38.400612 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:38.997522 kubelet[2362]: E0317 17:55:38.997477 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:39.300373 kubelet[2362]: I0317 17:55:39.299177 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9"
Mar 17 17:55:39.300752 containerd[1904]: time="2025-03-17T17:55:39.300713153Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\""
Mar 17 17:55:39.301137 containerd[1904]: time="2025-03-17T17:55:39.300962182Z" level=info msg="Ensure that sandbox 49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9 in task-service has been cleanup successfully"
Mar 17 17:55:39.303496 containerd[1904]: time="2025-03-17T17:55:39.303341086Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully"
Mar 17 17:55:39.303496 containerd[1904]: time="2025-03-17T17:55:39.303375658Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully"
Mar 17 17:55:39.303844 containerd[1904]: time="2025-03-17T17:55:39.303801608Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\""
Mar 17 17:55:39.304324 containerd[1904]: time="2025-03-17T17:55:39.303908909Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully"
Mar 17 17:55:39.304324 containerd[1904]: time="2025-03-17T17:55:39.303923925Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully"
Mar 17 17:55:39.304508 containerd[1904]: time="2025-03-17T17:55:39.304481337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:2,}"
Mar 17 17:55:39.304941 systemd[1]: run-netns-cni\x2d9c6b349c\x2d40e6\x2da79e\x2d5ba8\x2d3446934afdfc.mount: Deactivated successfully.
Mar 17 17:55:39.446215 containerd[1904]: time="2025-03-17T17:55:39.446165987Z" level=error msg="Failed to destroy network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:39.448414 containerd[1904]: time="2025-03-17T17:55:39.446693110Z" level=error msg="encountered an error cleaning up failed sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:39.448414 containerd[1904]: time="2025-03-17T17:55:39.446773345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:39.448763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60-shm.mount: Deactivated successfully.
Mar 17 17:55:39.449688 kubelet[2362]: E0317 17:55:39.449009 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:39.449688 kubelet[2362]: E0317 17:55:39.449138 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:39.449688 kubelet[2362]: E0317 17:55:39.449227 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:39.452424 kubelet[2362]: E0317 17:55:39.452089 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:39.998164 kubelet[2362]: E0317 17:55:39.998125 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:40.304052 kubelet[2362]: I0317 17:55:40.303661 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60"
Mar 17 17:55:40.304616 containerd[1904]: time="2025-03-17T17:55:40.304570723Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\""
Mar 17 17:55:40.307297 containerd[1904]: time="2025-03-17T17:55:40.306071642Z" level=info msg="Ensure that sandbox a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60 in task-service has been cleanup successfully"
Mar 17 17:55:40.311679 containerd[1904]: time="2025-03-17T17:55:40.311627813Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully"
Mar 17 17:55:40.311840 containerd[1904]: time="2025-03-17T17:55:40.311821235Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully"
Mar 17 17:55:40.312948 systemd[1]: run-netns-cni\x2de7a72ce0\x2d5f87\x2d2b2c\x2d21ae\x2de448474de3bf.mount: Deactivated successfully.
Mar 17 17:55:40.317158 containerd[1904]: time="2025-03-17T17:55:40.316657341Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\""
Mar 17 17:55:40.317158 containerd[1904]: time="2025-03-17T17:55:40.316796866Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully"
Mar 17 17:55:40.317158 containerd[1904]: time="2025-03-17T17:55:40.316814177Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully"
Mar 17 17:55:40.318649 containerd[1904]: time="2025-03-17T17:55:40.317695303Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\""
Mar 17 17:55:40.318649 containerd[1904]: time="2025-03-17T17:55:40.317822267Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully"
Mar 17 17:55:40.318649 containerd[1904]: time="2025-03-17T17:55:40.317838554Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully"
Mar 17 17:55:40.318975 containerd[1904]: time="2025-03-17T17:55:40.318929784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:3,}"
Mar 17 17:55:40.489709 containerd[1904]: time="2025-03-17T17:55:40.489602153Z" level=error msg="Failed to destroy network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:40.490904 containerd[1904]: time="2025-03-17T17:55:40.490221611Z" level=error msg="encountered an error cleaning up failed sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:40.493292 containerd[1904]: time="2025-03-17T17:55:40.491140227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:40.493905 kubelet[2362]: E0317 17:55:40.493860 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:40.493997 kubelet[2362]: E0317 17:55:40.493934 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:40.493997 kubelet[2362]: E0317 17:55:40.493961 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:40.494081 kubelet[2362]: E0317 17:55:40.494011 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:40.495063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735-shm.mount: Deactivated successfully.
Mar 17 17:55:40.998642 kubelet[2362]: E0317 17:55:40.998596 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:41.308001 kubelet[2362]: I0317 17:55:41.307896 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735"
Mar 17 17:55:41.309088 containerd[1904]: time="2025-03-17T17:55:41.308986019Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\""
Mar 17 17:55:41.309569 containerd[1904]: time="2025-03-17T17:55:41.309331593Z" level=info msg="Ensure that sandbox 574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735 in task-service has been cleanup successfully"
Mar 17 17:55:41.311578 containerd[1904]: time="2025-03-17T17:55:41.311545604Z" level=info msg="TearDown network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully"
Mar 17 17:55:41.311682 containerd[1904]: time="2025-03-17T17:55:41.311578367Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully"
Mar 17 17:55:41.314243 containerd[1904]: time="2025-03-17T17:55:41.312967291Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\""
Mar 17 17:55:41.314243 containerd[1904]: time="2025-03-17T17:55:41.313076081Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully"
Mar 17 17:55:41.314243 containerd[1904]: time="2025-03-17T17:55:41.313090445Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully"
Mar 17 17:55:41.313886 systemd[1]: run-netns-cni\x2db67d7adc\x2d6a49\x2d94ee\x2d19bd\x2da0446d3ccf39.mount: Deactivated successfully.
Mar 17 17:55:41.315177 containerd[1904]: time="2025-03-17T17:55:41.314958996Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\""
Mar 17 17:55:41.315177 containerd[1904]: time="2025-03-17T17:55:41.315136406Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully"
Mar 17 17:55:41.315177 containerd[1904]: time="2025-03-17T17:55:41.315154059Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully"
Mar 17 17:55:41.317011 containerd[1904]: time="2025-03-17T17:55:41.316983713Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\""
Mar 17 17:55:41.317116 containerd[1904]: time="2025-03-17T17:55:41.317085317Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully"
Mar 17 17:55:41.317165 containerd[1904]: time="2025-03-17T17:55:41.317112618Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully"
Mar 17 17:55:41.318775 containerd[1904]: time="2025-03-17T17:55:41.318414491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:4,}"
Mar 17 17:55:41.456705 containerd[1904]: time="2025-03-17T17:55:41.456656045Z" level=error msg="Failed to destroy network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:41.459637 containerd[1904]: time="2025-03-17T17:55:41.459583969Z" level=error msg="encountered an error cleaning up failed sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:41.459776 containerd[1904]: time="2025-03-17T17:55:41.459681003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:41.462071 kubelet[2362]: E0317 17:55:41.460125 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:41.462071 kubelet[2362]: E0317 17:55:41.460399 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:41.462071 kubelet[2362]: E0317 17:55:41.460435 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588"
Mar 17 17:55:41.461527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68-shm.mount: Deactivated successfully.
Mar 17 17:55:41.462941 kubelet[2362]: E0317 17:55:41.460483 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3"
Mar 17 17:55:41.982954 kubelet[2362]: E0317 17:55:41.982904 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:41.999826 kubelet[2362]: E0317 17:55:41.999743 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:55:42.315383 kubelet[2362]: I0317 17:55:42.314931 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68"
Mar 17 17:55:42.316504 containerd[1904]: time="2025-03-17T17:55:42.316399913Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\""
Mar 17 17:55:42.318481 containerd[1904]: time="2025-03-17T17:55:42.318440606Z" level=info msg="Ensure that sandbox 2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68 in task-service has been cleanup successfully"
Mar 17 17:55:42.321404 containerd[1904]: time="2025-03-17T17:55:42.321358135Z" level=info msg="TearDown network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" successfully"
Mar 17 17:55:42.321983 containerd[1904]: time="2025-03-17T17:55:42.321866882Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" returns successfully"
Mar 17 17:55:42.321930 systemd[1]: run-netns-cni\x2d8517778e\x2de45d\x2ddc4a\x2dd9ac\x2d0aa30b9a29a3.mount: Deactivated successfully.
Mar 17 17:55:42.325970 containerd[1904]: time="2025-03-17T17:55:42.325547010Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\""
Mar 17 17:55:42.325970 containerd[1904]: time="2025-03-17T17:55:42.325741500Z" level=info msg="TearDown network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully"
Mar 17 17:55:42.325970 containerd[1904]: time="2025-03-17T17:55:42.325788849Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully"
Mar 17 17:55:42.327424 containerd[1904]: time="2025-03-17T17:55:42.327396126Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\""
Mar 17 17:55:42.327741 containerd[1904]: time="2025-03-17T17:55:42.327705272Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully"
Mar 17 17:55:42.327741 containerd[1904]: time="2025-03-17T17:55:42.327727877Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully"
Mar 17 17:55:42.328486 containerd[1904]: time="2025-03-17T17:55:42.328460998Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\""
Mar 17 17:55:42.328764 containerd[1904]: time="2025-03-17T17:55:42.328563652Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully"
Mar 17 17:55:42.328764 containerd[1904]: time="2025-03-17T17:55:42.328581722Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully"
Mar 17 17:55:42.330225 containerd[1904]: time="2025-03-17T17:55:42.330008246Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\""
Mar 17 17:55:42.330701 containerd[1904]: time="2025-03-17T17:55:42.330489358Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully"
Mar 17 17:55:42.330701 containerd[1904]: time="2025-03-17T17:55:42.330511072Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully"
Mar 17 17:55:42.331340 containerd[1904]: time="2025-03-17T17:55:42.331176958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:5,}"
Mar 17 17:55:42.556657 containerd[1904]: time="2025-03-17T17:55:42.556606431Z" level=error msg="Failed to destroy network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:55:42.559305 containerd[1904]: time="2025-03-17T17:55:42.557301655Z"
level=error msg="encountered an error cleaning up failed sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:42.559305 containerd[1904]: time="2025-03-17T17:55:42.557385858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:42.559448 kubelet[2362]: E0317 17:55:42.557621 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:42.559448 kubelet[2362]: E0317 17:55:42.557693 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588" Mar 17 17:55:42.559448 kubelet[2362]: E0317 17:55:42.557720 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588" Mar 17 17:55:42.559543 kubelet[2362]: E0317 17:55:42.557770 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3" Mar 17 17:55:42.560982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f-shm.mount: Deactivated successfully. Mar 17 17:55:42.699648 systemd[1]: Created slice kubepods-besteffort-podf32d7416_7490_49db_a6bd_24969566da5a.slice - libcontainer container kubepods-besteffort-podf32d7416_7490_49db_a6bd_24969566da5a.slice. 
Mar 17 17:55:42.818840 kubelet[2362]: I0317 17:55:42.818803 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqhs\" (UniqueName: \"kubernetes.io/projected/f32d7416-7490-49db-a6bd-24969566da5a-kube-api-access-ccqhs\") pod \"nginx-deployment-8587fbcb89-mq7bc\" (UID: \"f32d7416-7490-49db-a6bd-24969566da5a\") " pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:43.000617 kubelet[2362]: E0317 17:55:43.000220 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:43.004454 containerd[1904]: time="2025-03-17T17:55:43.004407659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:0,}" Mar 17 17:55:43.209790 containerd[1904]: time="2025-03-17T17:55:43.209737273Z" level=error msg="Failed to destroy network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.210481 containerd[1904]: time="2025-03-17T17:55:43.210410459Z" level=error msg="encountered an error cleaning up failed sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.210581 containerd[1904]: time="2025-03-17T17:55:43.210531219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.210833 kubelet[2362]: E0317 17:55:43.210777 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.210962 kubelet[2362]: E0317 17:55:43.210841 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:43.210962 kubelet[2362]: E0317 17:55:43.210871 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:43.210962 kubelet[2362]: E0317 17:55:43.210929 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mq7bc_default(f32d7416-7490-49db-a6bd-24969566da5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-8587fbcb89-mq7bc_default(f32d7416-7490-49db-a6bd-24969566da5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mq7bc" podUID="f32d7416-7490-49db-a6bd-24969566da5a" Mar 17 17:55:43.329171 kubelet[2362]: I0317 17:55:43.329073 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd" Mar 17 17:55:43.332242 containerd[1904]: time="2025-03-17T17:55:43.332202179Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\"" Mar 17 17:55:43.333245 containerd[1904]: time="2025-03-17T17:55:43.333099597Z" level=info msg="Ensure that sandbox 765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd in task-service has been cleanup successfully" Mar 17 17:55:43.333441 containerd[1904]: time="2025-03-17T17:55:43.333346452Z" level=info msg="TearDown network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" successfully" Mar 17 17:55:43.333441 containerd[1904]: time="2025-03-17T17:55:43.333367577Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" returns successfully" Mar 17 17:55:43.338186 containerd[1904]: time="2025-03-17T17:55:43.337174411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:1,}" Mar 17 17:55:43.338152 systemd[1]: run-netns-cni\x2dc4e2814b\x2da14d\x2d03bd\x2dafc9\x2dba26d059ba96.mount: Deactivated successfully. 
Mar 17 17:55:43.341558 kubelet[2362]: I0317 17:55:43.341061 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f" Mar 17 17:55:43.343163 containerd[1904]: time="2025-03-17T17:55:43.343128959Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\"" Mar 17 17:55:43.348440 containerd[1904]: time="2025-03-17T17:55:43.343590017Z" level=info msg="Ensure that sandbox 04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f in task-service has been cleanup successfully" Mar 17 17:55:43.348440 containerd[1904]: time="2025-03-17T17:55:43.343883573Z" level=info msg="TearDown network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" successfully" Mar 17 17:55:43.348440 containerd[1904]: time="2025-03-17T17:55:43.343922263Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" returns successfully" Mar 17 17:55:43.351245 containerd[1904]: time="2025-03-17T17:55:43.350107442Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\"" Mar 17 17:55:43.351245 containerd[1904]: time="2025-03-17T17:55:43.350225521Z" level=info msg="TearDown network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" successfully" Mar 17 17:55:43.351245 containerd[1904]: time="2025-03-17T17:55:43.350242842Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" returns successfully" Mar 17 17:55:43.351952 containerd[1904]: time="2025-03-17T17:55:43.351838527Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" Mar 17 17:55:43.353477 containerd[1904]: time="2025-03-17T17:55:43.351955074Z" level=info msg="TearDown network for sandbox 
\"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully" Mar 17 17:55:43.353477 containerd[1904]: time="2025-03-17T17:55:43.351969726Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully" Mar 17 17:55:43.352127 systemd[1]: run-netns-cni\x2dc8b595aa\x2d5f6a\x2dd363\x2dfd5c\x2d5f71fb6b6b2a.mount: Deactivated successfully. Mar 17 17:55:43.353836 containerd[1904]: time="2025-03-17T17:55:43.353639621Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" Mar 17 17:55:43.353941 containerd[1904]: time="2025-03-17T17:55:43.353871901Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully" Mar 17 17:55:43.354032 containerd[1904]: time="2025-03-17T17:55:43.353946383Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully" Mar 17 17:55:43.356762 containerd[1904]: time="2025-03-17T17:55:43.355319955Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" Mar 17 17:55:43.356762 containerd[1904]: time="2025-03-17T17:55:43.355452920Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully" Mar 17 17:55:43.356762 containerd[1904]: time="2025-03-17T17:55:43.355469012Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully" Mar 17 17:55:43.361013 containerd[1904]: time="2025-03-17T17:55:43.360316281Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" Mar 17 17:55:43.361318 containerd[1904]: time="2025-03-17T17:55:43.361297626Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" 
successfully" Mar 17 17:55:43.364480 containerd[1904]: time="2025-03-17T17:55:43.364446297Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully" Mar 17 17:55:43.368527 containerd[1904]: time="2025-03-17T17:55:43.368435203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:6,}" Mar 17 17:55:43.584416 containerd[1904]: time="2025-03-17T17:55:43.582814626Z" level=error msg="Failed to destroy network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.584416 containerd[1904]: time="2025-03-17T17:55:43.583253303Z" level=error msg="encountered an error cleaning up failed sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.584416 containerd[1904]: time="2025-03-17T17:55:43.583443349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.584692 kubelet[2362]: E0317 17:55:43.583689 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.584692 kubelet[2362]: E0317 17:55:43.583759 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:43.584692 kubelet[2362]: E0317 17:55:43.583787 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:43.584874 kubelet[2362]: E0317 17:55:43.583844 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mq7bc_default(f32d7416-7490-49db-a6bd-24969566da5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mq7bc_default(f32d7416-7490-49db-a6bd-24969566da5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-8587fbcb89-mq7bc" podUID="f32d7416-7490-49db-a6bd-24969566da5a" Mar 17 17:55:43.608746 containerd[1904]: time="2025-03-17T17:55:43.608516634Z" level=error msg="Failed to destroy network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.609550 containerd[1904]: time="2025-03-17T17:55:43.609503583Z" level=error msg="encountered an error cleaning up failed sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.609656 containerd[1904]: time="2025-03-17T17:55:43.609581923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.610496 kubelet[2362]: E0317 17:55:43.609931 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:43.610496 kubelet[2362]: E0317 17:55:43.610003 2362 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588" Mar 17 17:55:43.610496 kubelet[2362]: E0317 17:55:43.610029 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588" Mar 17 17:55:43.610675 kubelet[2362]: E0317 17:55:43.610087 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3" Mar 17 17:55:44.000385 kubelet[2362]: E0317 17:55:44.000340 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:44.322333 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb-shm.mount: Deactivated successfully. Mar 17 17:55:44.322823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0-shm.mount: Deactivated successfully. Mar 17 17:55:44.353402 kubelet[2362]: I0317 17:55:44.352068 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0" Mar 17 17:55:44.355045 containerd[1904]: time="2025-03-17T17:55:44.354580695Z" level=info msg="StopPodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\"" Mar 17 17:55:44.355045 containerd[1904]: time="2025-03-17T17:55:44.354855019Z" level=info msg="Ensure that sandbox 539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0 in task-service has been cleanup successfully" Mar 17 17:55:44.358383 containerd[1904]: time="2025-03-17T17:55:44.358341740Z" level=info msg="TearDown network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" successfully" Mar 17 17:55:44.358537 containerd[1904]: time="2025-03-17T17:55:44.358518666Z" level=info msg="StopPodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" returns successfully" Mar 17 17:55:44.360030 containerd[1904]: time="2025-03-17T17:55:44.359589331Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\"" Mar 17 17:55:44.360030 containerd[1904]: time="2025-03-17T17:55:44.359693335Z" level=info msg="TearDown network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" successfully" Mar 17 17:55:44.360030 containerd[1904]: time="2025-03-17T17:55:44.359707506Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" returns successfully" Mar 17 17:55:44.359970 
systemd[1]: run-netns-cni\x2d040c1617\x2dfc32\x2d6d39\x2dd5da\x2d98eee95838e5.mount: Deactivated successfully. Mar 17 17:55:44.361696 containerd[1904]: time="2025-03-17T17:55:44.360946923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:2,}" Mar 17 17:55:44.378326 kubelet[2362]: I0317 17:55:44.377603 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb" Mar 17 17:55:44.378486 containerd[1904]: time="2025-03-17T17:55:44.378432649Z" level=info msg="StopPodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\"" Mar 17 17:55:44.378822 containerd[1904]: time="2025-03-17T17:55:44.378759124Z" level=info msg="Ensure that sandbox febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb in task-service has been cleanup successfully" Mar 17 17:55:44.379861 containerd[1904]: time="2025-03-17T17:55:44.379213402Z" level=info msg="TearDown network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" successfully" Mar 17 17:55:44.379861 containerd[1904]: time="2025-03-17T17:55:44.379237886Z" level=info msg="StopPodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" returns successfully" Mar 17 17:55:44.383042 containerd[1904]: time="2025-03-17T17:55:44.381919306Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\"" Mar 17 17:55:44.383042 containerd[1904]: time="2025-03-17T17:55:44.382087752Z" level=info msg="TearDown network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" successfully" Mar 17 17:55:44.383042 containerd[1904]: time="2025-03-17T17:55:44.382128753Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" returns successfully" Mar 17 
17:55:44.383094 systemd[1]: run-netns-cni\x2d0a00e700\x2d3f5c\x2d753a\x2d77d9\x2dd489ff4854ee.mount: Deactivated successfully. Mar 17 17:55:44.383695 containerd[1904]: time="2025-03-17T17:55:44.383409390Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\"" Mar 17 17:55:44.383695 containerd[1904]: time="2025-03-17T17:55:44.383530570Z" level=info msg="TearDown network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" successfully" Mar 17 17:55:44.383695 containerd[1904]: time="2025-03-17T17:55:44.383546139Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" returns successfully" Mar 17 17:55:44.386644 containerd[1904]: time="2025-03-17T17:55:44.386609648Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" Mar 17 17:55:44.386749 containerd[1904]: time="2025-03-17T17:55:44.386721906Z" level=info msg="TearDown network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully" Mar 17 17:55:44.386749 containerd[1904]: time="2025-03-17T17:55:44.386736955Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully" Mar 17 17:55:44.387709 containerd[1904]: time="2025-03-17T17:55:44.387676445Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" Mar 17 17:55:44.389128 containerd[1904]: time="2025-03-17T17:55:44.387804805Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully" Mar 17 17:55:44.389128 containerd[1904]: time="2025-03-17T17:55:44.387817720Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully" Mar 17 17:55:44.389128 containerd[1904]: time="2025-03-17T17:55:44.388761420Z" 
level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" Mar 17 17:55:44.389128 containerd[1904]: time="2025-03-17T17:55:44.388903888Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully" Mar 17 17:55:44.389128 containerd[1904]: time="2025-03-17T17:55:44.388923198Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully" Mar 17 17:55:44.389932 containerd[1904]: time="2025-03-17T17:55:44.389603880Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" Mar 17 17:55:44.389932 containerd[1904]: time="2025-03-17T17:55:44.389786095Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully" Mar 17 17:55:44.389932 containerd[1904]: time="2025-03-17T17:55:44.389869729Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully" Mar 17 17:55:44.391501 containerd[1904]: time="2025-03-17T17:55:44.391470077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:7,}" Mar 17 17:55:44.585261 containerd[1904]: time="2025-03-17T17:55:44.584085093Z" level=error msg="Failed to destroy network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.585261 containerd[1904]: time="2025-03-17T17:55:44.584440467Z" level=error msg="encountered an error cleaning up failed sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.585261 containerd[1904]: time="2025-03-17T17:55:44.584513177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.585638 kubelet[2362]: E0317 17:55:44.585173 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.585638 kubelet[2362]: E0317 17:55:44.585436 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:44.585638 kubelet[2362]: E0317 17:55:44.585474 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mq7bc" Mar 17 17:55:44.586079 kubelet[2362]: E0317 17:55:44.585649 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mq7bc_default(f32d7416-7490-49db-a6bd-24969566da5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mq7bc_default(f32d7416-7490-49db-a6bd-24969566da5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mq7bc" podUID="f32d7416-7490-49db-a6bd-24969566da5a" Mar 17 17:55:44.603907 containerd[1904]: time="2025-03-17T17:55:44.603768993Z" level=error msg="Failed to destroy network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.604489 containerd[1904]: time="2025-03-17T17:55:44.604301883Z" level=error msg="encountered an error cleaning up failed sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.604489 containerd[1904]: time="2025-03-17T17:55:44.604381454Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.604657 kubelet[2362]: E0317 17:55:44.604605 2362 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:55:44.604733 kubelet[2362]: E0317 17:55:44.604680 2362 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588" Mar 17 17:55:44.604733 kubelet[2362]: E0317 17:55:44.604708 2362 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq588" Mar 17 17:55:44.604912 kubelet[2362]: E0317 17:55:44.604758 2362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq588_calico-system(1d1a937b-4e87-4260-a1b6-2ddb288ceef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq588" podUID="1d1a937b-4e87-4260-a1b6-2ddb288ceef3" Mar 17 17:55:44.793511 containerd[1904]: time="2025-03-17T17:55:44.793455655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:44.795727 containerd[1904]: time="2025-03-17T17:55:44.795671164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:55:44.797936 containerd[1904]: time="2025-03-17T17:55:44.797879328Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:44.801328 containerd[1904]: time="2025-03-17T17:55:44.801258253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:44.802300 containerd[1904]: time="2025-03-17T17:55:44.801877645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 7.506049843s" Mar 17 17:55:44.802300 containerd[1904]: time="2025-03-17T17:55:44.801917699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:55:44.811869 containerd[1904]: time="2025-03-17T17:55:44.811829044Z" level=info msg="CreateContainer within sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:55:44.848947 containerd[1904]: time="2025-03-17T17:55:44.848810921Z" level=info msg="CreateContainer within sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\"" Mar 17 17:55:44.849660 containerd[1904]: time="2025-03-17T17:55:44.849629235Z" level=info msg="StartContainer for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\"" Mar 17 17:55:44.975498 systemd[1]: Started cri-containerd-5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954.scope - libcontainer container 5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954. Mar 17 17:55:45.000718 kubelet[2362]: E0317 17:55:45.000507 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:45.020707 containerd[1904]: time="2025-03-17T17:55:45.019718042Z" level=info msg="StartContainer for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" returns successfully" Mar 17 17:55:45.186058 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:55:45.186248 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 17 17:55:45.334927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab-shm.mount: Deactivated successfully. Mar 17 17:55:45.335528 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba-shm.mount: Deactivated successfully. Mar 17 17:55:45.335740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511030247.mount: Deactivated successfully. Mar 17 17:55:45.387332 kubelet[2362]: I0317 17:55:45.386608 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab" Mar 17 17:55:45.390508 containerd[1904]: time="2025-03-17T17:55:45.388465248Z" level=info msg="StopPodSandbox for \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\"" Mar 17 17:55:45.392434 containerd[1904]: time="2025-03-17T17:55:45.391759551Z" level=info msg="Ensure that sandbox 6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab in task-service has been cleanup successfully" Mar 17 17:55:45.394567 containerd[1904]: time="2025-03-17T17:55:45.394148133Z" level=info msg="TearDown network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\" successfully" Mar 17 17:55:45.394567 containerd[1904]: time="2025-03-17T17:55:45.394178990Z" level=info msg="StopPodSandbox for \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\" returns successfully" Mar 17 17:55:45.396128 kubelet[2362]: I0317 17:55:45.395595 2362 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba" Mar 17 17:55:45.398163 containerd[1904]: time="2025-03-17T17:55:45.397143371Z" level=info msg="StopPodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\"" Mar 17 17:55:45.398163 containerd[1904]: 
time="2025-03-17T17:55:45.397394988Z" level=info msg="TearDown network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" successfully" Mar 17 17:55:45.398163 containerd[1904]: time="2025-03-17T17:55:45.397415941Z" level=info msg="StopPodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" returns successfully" Mar 17 17:55:45.398163 containerd[1904]: time="2025-03-17T17:55:45.397588143Z" level=info msg="StopPodSandbox for \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\"" Mar 17 17:55:45.398163 containerd[1904]: time="2025-03-17T17:55:45.397800019Z" level=info msg="Ensure that sandbox cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba in task-service has been cleanup successfully" Mar 17 17:55:45.398711 containerd[1904]: time="2025-03-17T17:55:45.398674701Z" level=info msg="TearDown network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\" successfully" Mar 17 17:55:45.398820 containerd[1904]: time="2025-03-17T17:55:45.398801351Z" level=info msg="StopPodSandbox for \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\" returns successfully" Mar 17 17:55:45.400434 containerd[1904]: time="2025-03-17T17:55:45.400107332Z" level=info msg="StopPodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\"" Mar 17 17:55:45.401199 containerd[1904]: time="2025-03-17T17:55:45.400955982Z" level=info msg="TearDown network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" successfully" Mar 17 17:55:45.401199 containerd[1904]: time="2025-03-17T17:55:45.400977563Z" level=info msg="StopPodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" returns successfully" Mar 17 17:55:45.401199 containerd[1904]: time="2025-03-17T17:55:45.401059839Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\"" Mar 17 17:55:45.401199 
containerd[1904]: time="2025-03-17T17:55:45.401143284Z" level=info msg="TearDown network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" successfully" Mar 17 17:55:45.401199 containerd[1904]: time="2025-03-17T17:55:45.401156903Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" returns successfully" Mar 17 17:55:45.401821 containerd[1904]: time="2025-03-17T17:55:45.401367932Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\"" Mar 17 17:55:45.401821 containerd[1904]: time="2025-03-17T17:55:45.401450752Z" level=info msg="TearDown network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" successfully" Mar 17 17:55:45.401821 containerd[1904]: time="2025-03-17T17:55:45.401464989Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" returns successfully" Mar 17 17:55:45.402322 systemd[1]: run-netns-cni\x2d6b566e4c\x2d278c\x2d339f\x2d76c5\x2d32ab819621e4.mount: Deactivated successfully. 
Mar 17 17:55:45.403657 containerd[1904]: time="2025-03-17T17:55:45.402493441Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\"" Mar 17 17:55:45.403657 containerd[1904]: time="2025-03-17T17:55:45.402748235Z" level=info msg="TearDown network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" successfully" Mar 17 17:55:45.403657 containerd[1904]: time="2025-03-17T17:55:45.402770024Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" returns successfully" Mar 17 17:55:45.405526 containerd[1904]: time="2025-03-17T17:55:45.404817232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:3,}" Mar 17 17:55:45.410861 containerd[1904]: time="2025-03-17T17:55:45.410479823Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" Mar 17 17:55:45.410861 containerd[1904]: time="2025-03-17T17:55:45.410603984Z" level=info msg="TearDown network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully" Mar 17 17:55:45.410861 containerd[1904]: time="2025-03-17T17:55:45.410622977Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully" Mar 17 17:55:45.412080 containerd[1904]: time="2025-03-17T17:55:45.411954732Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" Mar 17 17:55:45.412080 containerd[1904]: time="2025-03-17T17:55:45.412060836Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully" Mar 17 17:55:45.412080 containerd[1904]: time="2025-03-17T17:55:45.412077803Z" level=info msg="StopPodSandbox for 
\"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully" Mar 17 17:55:45.416884 containerd[1904]: time="2025-03-17T17:55:45.413835703Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" Mar 17 17:55:45.416884 containerd[1904]: time="2025-03-17T17:55:45.414045369Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully" Mar 17 17:55:45.416884 containerd[1904]: time="2025-03-17T17:55:45.414063550Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully" Mar 17 17:55:45.414025 systemd[1]: run-netns-cni\x2d7189b0d4\x2d71d9\x2dd223\x2db526\x2db5438ba0e235.mount: Deactivated successfully. Mar 17 17:55:45.419448 containerd[1904]: time="2025-03-17T17:55:45.419158308Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" Mar 17 17:55:45.419448 containerd[1904]: time="2025-03-17T17:55:45.419324371Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully" Mar 17 17:55:45.419448 containerd[1904]: time="2025-03-17T17:55:45.419342061Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully" Mar 17 17:55:45.421824 containerd[1904]: time="2025-03-17T17:55:45.421779098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:8,}" Mar 17 17:55:45.507556 kubelet[2362]: I0317 17:55:45.507139 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tz4vw" podStartSLOduration=4.416041579 podStartE2EDuration="23.507117357s" podCreationTimestamp="2025-03-17 17:55:22 +0000 UTC" firstStartedPulling="2025-03-17 17:55:25.712381611 
+0000 UTC m=+4.679651094" lastFinishedPulling="2025-03-17 17:55:44.803457392 +0000 UTC m=+23.770726872" observedRunningTime="2025-03-17 17:55:45.49636051 +0000 UTC m=+24.463630008" watchObservedRunningTime="2025-03-17 17:55:45.507117357 +0000 UTC m=+24.474386854" Mar 17 17:55:46.001109 kubelet[2362]: E0317 17:55:46.001066 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:46.546464 (udev-worker)[3268]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:55:46.548617 systemd-networkd[1743]: calidf4f182cf1b: Link UP Mar 17 17:55:46.549400 systemd-networkd[1743]: calidf4f182cf1b: Gained carrier Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:45.538 [INFO][3292] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:45.849 [INFO][3292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.100-k8s-csi--node--driver--zq588-eth0 csi-node-driver- calico-system 1d1a937b-4e87-4260-a1b6-2ddb288ceef3 754 0 2025-03-17 17:55:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.26.100 csi-node-driver-zq588 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidf4f182cf1b [] []}} ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:45.856 [INFO][3292] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.031 [INFO][3308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" HandleID="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Workload="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.217 [INFO][3308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" HandleID="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Workload="172.31.26.100-k8s-csi--node--driver--zq588-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b700), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.26.100", "pod":"csi-node-driver-zq588", "timestamp":"2025-03-17 17:55:46.025042835 +0000 UTC"}, Hostname:"172.31.26.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.218 [INFO][3308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.218 [INFO][3308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.219 [INFO][3308] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.100' Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.305 [INFO][3308] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.337 [INFO][3308] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.354 [INFO][3308] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.377 [INFO][3308] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.389 [INFO][3308] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.389 [INFO][3308] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.413 [INFO][3308] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.432 [INFO][3308] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.495 [INFO][3308] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.129/26] block=192.168.70.128/26 
handle="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.495 [INFO][3308] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.129/26] handle="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" host="172.31.26.100" Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.495 [INFO][3308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:55:46.623175 containerd[1904]: 2025-03-17 17:55:46.495 [INFO][3308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.129/26] IPv6=[] ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" HandleID="k8s-pod-network.f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Workload="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.624319 containerd[1904]: 2025-03-17 17:55:46.501 [INFO][3292] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-csi--node--driver--zq588-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d1a937b-4e87-4260-a1b6-2ddb288ceef3", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 55, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"", Pod:"csi-node-driver-zq588", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf4f182cf1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:55:46.624319 containerd[1904]: 2025-03-17 17:55:46.502 [INFO][3292] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.129/32] ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.624319 containerd[1904]: 2025-03-17 17:55:46.502 [INFO][3292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf4f182cf1b ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.624319 containerd[1904]: 2025-03-17 17:55:46.550 [INFO][3292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.624319 containerd[1904]: 2025-03-17 17:55:46.550 [INFO][3292] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" 
Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-csi--node--driver--zq588-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d1a937b-4e87-4260-a1b6-2ddb288ceef3", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 55, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c", Pod:"csi-node-driver-zq588", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf4f182cf1b", MAC:"d2:04:ba:3b:ff:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:55:46.624319 containerd[1904]: 2025-03-17 17:55:46.621 [INFO][3292] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c" Namespace="calico-system" Pod="csi-node-driver-zq588" WorkloadEndpoint="172.31.26.100-k8s-csi--node--driver--zq588-eth0" Mar 17 17:55:46.648047 
containerd[1904]: time="2025-03-17T17:55:46.647715964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:55:46.648047 containerd[1904]: time="2025-03-17T17:55:46.647774076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:55:46.648047 containerd[1904]: time="2025-03-17T17:55:46.647795407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:55:46.648047 containerd[1904]: time="2025-03-17T17:55:46.647890423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:55:46.683532 systemd[1]: Started cri-containerd-f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c.scope - libcontainer container f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c. 
Mar 17 17:55:46.694617 systemd-networkd[1743]: calia6b8846d645: Link UP Mar 17 17:55:46.694844 systemd-networkd[1743]: calia6b8846d645: Gained carrier Mar 17 17:55:46.729343 containerd[1904]: time="2025-03-17T17:55:46.729301648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq588,Uid:1d1a937b-4e87-4260-a1b6-2ddb288ceef3,Namespace:calico-system,Attempt:8,} returns sandbox id \"f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c\"" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:45.510 [INFO][3282] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:45.849 [INFO][3282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0 nginx-deployment-8587fbcb89- default f32d7416-7490-49db-a6bd-24969566da5a 931 0 2025-03-17 17:55:42 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.100 nginx-deployment-8587fbcb89-mq7bc eth0 default [] [] [kns.default ksa.default.default] calia6b8846d645 [] []}} ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:45.856 [INFO][3282] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.050 [INFO][3306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" HandleID="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Workload="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.220 [INFO][3306] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" HandleID="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Workload="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003372b0), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.100", "pod":"nginx-deployment-8587fbcb89-mq7bc", "timestamp":"2025-03-17 17:55:46.050308655 +0000 UTC"}, Hostname:"172.31.26.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.220 [INFO][3306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.495 [INFO][3306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.496 [INFO][3306] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.100' Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.508 [INFO][3306] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.541 [INFO][3306] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.609 [INFO][3306] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.622 [INFO][3306] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.629 [INFO][3306] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.629 [INFO][3306] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.633 [INFO][3306] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324 Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.644 [INFO][3306] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.685 [INFO][3306] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.130/26] block=192.168.70.128/26 
handle="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.685 [INFO][3306] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.130/26] handle="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" host="172.31.26.100" Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.685 [INFO][3306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:55:46.730792 containerd[1904]: 2025-03-17 17:55:46.685 [INFO][3306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.130/26] IPv6=[] ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" HandleID="k8s-pod-network.e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Workload="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.734295 containerd[1904]: 2025-03-17 17:55:46.688 [INFO][3282] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f32d7416-7490-49db-a6bd-24969566da5a", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-mq7bc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia6b8846d645", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:55:46.734295 containerd[1904]: 2025-03-17 17:55:46.688 [INFO][3282] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.130/32] ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.734295 containerd[1904]: 2025-03-17 17:55:46.688 [INFO][3282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6b8846d645 ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.734295 containerd[1904]: 2025-03-17 17:55:46.692 [INFO][3282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.734295 containerd[1904]: 2025-03-17 17:55:46.692 [INFO][3282] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" 
WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f32d7416-7490-49db-a6bd-24969566da5a", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324", Pod:"nginx-deployment-8587fbcb89-mq7bc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia6b8846d645", MAC:"be:98:03:d6:74:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:55:46.734295 containerd[1904]: 2025-03-17 17:55:46.725 [INFO][3282] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324" Namespace="default" Pod="nginx-deployment-8587fbcb89-mq7bc" WorkloadEndpoint="172.31.26.100-k8s-nginx--deployment--8587fbcb89--mq7bc-eth0" Mar 17 17:55:46.735530 containerd[1904]: time="2025-03-17T17:55:46.735062624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 
17:55:46.769759 containerd[1904]: time="2025-03-17T17:55:46.769422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:55:46.769759 containerd[1904]: time="2025-03-17T17:55:46.769484502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:55:46.769759 containerd[1904]: time="2025-03-17T17:55:46.769499285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:55:46.769759 containerd[1904]: time="2025-03-17T17:55:46.769578738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:55:46.800505 systemd[1]: Started cri-containerd-e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324.scope - libcontainer container e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324. Mar 17 17:55:46.849893 containerd[1904]: time="2025-03-17T17:55:46.849847032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mq7bc,Uid:f32d7416-7490-49db-a6bd-24969566da5a,Namespace:default,Attempt:3,} returns sandbox id \"e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324\"" Mar 17 17:55:47.001741 kubelet[2362]: E0317 17:55:47.001662 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:47.580299 kernel: bpftool[3628]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:55:47.932233 systemd-networkd[1743]: vxlan.calico: Link UP Mar 17 17:55:47.932243 systemd-networkd[1743]: vxlan.calico: Gained carrier Mar 17 17:55:47.947196 systemd-networkd[1743]: calidf4f182cf1b: Gained IPv6LL Mar 17 17:55:47.960757 (udev-worker)[3402]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:55:48.001897 kubelet[2362]: E0317 17:55:48.001854 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:48.264435 systemd-networkd[1743]: calia6b8846d645: Gained IPv6LL Mar 17 17:55:48.619328 containerd[1904]: time="2025-03-17T17:55:48.619173222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:48.620747 containerd[1904]: time="2025-03-17T17:55:48.620598297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:55:48.622040 containerd[1904]: time="2025-03-17T17:55:48.621845163Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:48.624624 containerd[1904]: time="2025-03-17T17:55:48.624587974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:48.625206 containerd[1904]: time="2025-03-17T17:55:48.625167816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.889851933s" Mar 17 17:55:48.625300 containerd[1904]: time="2025-03-17T17:55:48.625210585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:55:48.626693 containerd[1904]: time="2025-03-17T17:55:48.626670332Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:55:48.627867 containerd[1904]: time="2025-03-17T17:55:48.627838429Z" level=info msg="CreateContainer within sandbox \"f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:55:48.648893 containerd[1904]: time="2025-03-17T17:55:48.648844151Z" level=info msg="CreateContainer within sandbox \"f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b4a5392d88029a29e4f86df53662d943fdabe0e877c59622f6fc9b6d2301a5f\"" Mar 17 17:55:48.649417 containerd[1904]: time="2025-03-17T17:55:48.649390911Z" level=info msg="StartContainer for \"8b4a5392d88029a29e4f86df53662d943fdabe0e877c59622f6fc9b6d2301a5f\"" Mar 17 17:55:48.686499 systemd[1]: Started cri-containerd-8b4a5392d88029a29e4f86df53662d943fdabe0e877c59622f6fc9b6d2301a5f.scope - libcontainer container 8b4a5392d88029a29e4f86df53662d943fdabe0e877c59622f6fc9b6d2301a5f. 
Mar 17 17:55:48.720717 containerd[1904]: time="2025-03-17T17:55:48.720072737Z" level=info msg="StartContainer for \"8b4a5392d88029a29e4f86df53662d943fdabe0e877c59622f6fc9b6d2301a5f\" returns successfully" Mar 17 17:55:49.002698 kubelet[2362]: E0317 17:55:49.002658 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:49.607590 systemd-networkd[1743]: vxlan.calico: Gained IPv6LL Mar 17 17:55:50.009321 kubelet[2362]: E0317 17:55:50.004591 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:51.005703 kubelet[2362]: E0317 17:55:51.005407 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:51.632832 ntpd[1877]: Listen normally on 7 vxlan.calico 192.168.70.128:123 Mar 17 17:55:51.632922 ntpd[1877]: Listen normally on 8 calidf4f182cf1b [fe80::ecee:eeff:feee:eeee%3]:123 Mar 17 17:55:51.634676 ntpd[1877]: 17 Mar 17:55:51 ntpd[1877]: Listen normally on 7 vxlan.calico 192.168.70.128:123 Mar 17 17:55:51.634676 ntpd[1877]: 17 Mar 17:55:51 ntpd[1877]: Listen normally on 8 calidf4f182cf1b [fe80::ecee:eeff:feee:eeee%3]:123 Mar 17 17:55:51.634676 ntpd[1877]: 17 Mar 17:55:51 ntpd[1877]: Listen normally on 9 calia6b8846d645 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 17 17:55:51.634676 ntpd[1877]: 17 Mar 17:55:51 ntpd[1877]: Listen normally on 10 vxlan.calico [fe80::643e:99ff:feb3:8892%5]:123 Mar 17 17:55:51.632978 ntpd[1877]: Listen normally on 9 calia6b8846d645 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 17 17:55:51.633029 ntpd[1877]: Listen normally on 10 vxlan.calico [fe80::643e:99ff:feb3:8892%5]:123 Mar 17 17:55:51.987461 update_engine[1885]: I20250317 17:55:51.987397 1885 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:55:52.010367 kubelet[2362]: E0317 17:55:52.006599 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:52.123299 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3774) Mar 17 17:55:52.199983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213606037.mount: Deactivated successfully. Mar 17 17:55:52.448400 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3765) Mar 17 17:55:53.007431 kubelet[2362]: E0317 17:55:53.007390 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:54.011560 kubelet[2362]: E0317 17:55:54.008315 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:54.432307 containerd[1904]: time="2025-03-17T17:55:54.432164792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:54.434513 containerd[1904]: time="2025-03-17T17:55:54.434301756Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 17 17:55:54.438306 containerd[1904]: time="2025-03-17T17:55:54.436764296Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:54.440839 containerd[1904]: time="2025-03-17T17:55:54.440762689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:54.442102 containerd[1904]: time="2025-03-17T17:55:54.442060413Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" 
with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 5.815279249s" Mar 17 17:55:54.442289 containerd[1904]: time="2025-03-17T17:55:54.442247402Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:55:54.449776 containerd[1904]: time="2025-03-17T17:55:54.449732077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:55:54.466902 containerd[1904]: time="2025-03-17T17:55:54.466853222Z" level=info msg="CreateContainer within sandbox \"e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:55:54.501685 containerd[1904]: time="2025-03-17T17:55:54.501640102Z" level=info msg="CreateContainer within sandbox \"e07d60cc35c9de86d8d98a85107eae94b6bb5eaeeaad7b1e5ed7cf5f3147e324\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e898cd1301a85b4db3ccde82d869676f523f5cb867d686f1c0a86615edd4cfaa\"" Mar 17 17:55:54.504835 containerd[1904]: time="2025-03-17T17:55:54.502589089Z" level=info msg="StartContainer for \"e898cd1301a85b4db3ccde82d869676f523f5cb867d686f1c0a86615edd4cfaa\"" Mar 17 17:55:54.619833 systemd[1]: Started cri-containerd-e898cd1301a85b4db3ccde82d869676f523f5cb867d686f1c0a86615edd4cfaa.scope - libcontainer container e898cd1301a85b4db3ccde82d869676f523f5cb867d686f1c0a86615edd4cfaa. 
Mar 17 17:55:54.693017 containerd[1904]: time="2025-03-17T17:55:54.692896286Z" level=info msg="StartContainer for \"e898cd1301a85b4db3ccde82d869676f523f5cb867d686f1c0a86615edd4cfaa\" returns successfully" Mar 17 17:55:55.012327 kubelet[2362]: E0317 17:55:55.012257 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:55.550624 kubelet[2362]: I0317 17:55:55.550554 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mq7bc" podStartSLOduration=5.947328767 podStartE2EDuration="13.545480641s" podCreationTimestamp="2025-03-17 17:55:42 +0000 UTC" firstStartedPulling="2025-03-17 17:55:46.851223876 +0000 UTC m=+25.818493354" lastFinishedPulling="2025-03-17 17:55:54.44937574 +0000 UTC m=+33.416645228" observedRunningTime="2025-03-17 17:55:55.543241189 +0000 UTC m=+34.510510685" watchObservedRunningTime="2025-03-17 17:55:55.545480641 +0000 UTC m=+34.512750143" Mar 17 17:55:56.012696 kubelet[2362]: E0317 17:55:56.012649 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:56.436714 containerd[1904]: time="2025-03-17T17:55:56.436592558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:56.438074 containerd[1904]: time="2025-03-17T17:55:56.437898161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 17:55:56.440403 containerd[1904]: time="2025-03-17T17:55:56.438988549Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:56.442215 containerd[1904]: time="2025-03-17T17:55:56.441295958Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:56.442215 containerd[1904]: time="2025-03-17T17:55:56.441976239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.992173661s" Mar 17 17:55:56.442215 containerd[1904]: time="2025-03-17T17:55:56.442012849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 17:55:56.455496 containerd[1904]: time="2025-03-17T17:55:56.455446760Z" level=info msg="CreateContainer within sandbox \"f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:55:56.475875 containerd[1904]: time="2025-03-17T17:55:56.475833811Z" level=info msg="CreateContainer within sandbox \"f3fe001c3a047238e6fbecdd11d02be26cc84889c395c62c8abbcfedabfaf45c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"43ae11340b0771da3047163a5ab2150c4d45f1d7dd398caede458562ef50fc98\"" Mar 17 17:55:56.476498 containerd[1904]: time="2025-03-17T17:55:56.476466010Z" level=info msg="StartContainer for \"43ae11340b0771da3047163a5ab2150c4d45f1d7dd398caede458562ef50fc98\"" Mar 17 17:55:56.529868 systemd[1]: Started cri-containerd-43ae11340b0771da3047163a5ab2150c4d45f1d7dd398caede458562ef50fc98.scope - libcontainer container 43ae11340b0771da3047163a5ab2150c4d45f1d7dd398caede458562ef50fc98. 
Mar 17 17:55:56.568599 containerd[1904]: time="2025-03-17T17:55:56.568551593Z" level=info msg="StartContainer for \"43ae11340b0771da3047163a5ab2150c4d45f1d7dd398caede458562ef50fc98\" returns successfully" Mar 17 17:55:57.015928 kubelet[2362]: E0317 17:55:57.015854 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:57.236740 kubelet[2362]: I0317 17:55:57.236694 2362 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:55:57.236740 kubelet[2362]: I0317 17:55:57.236753 2362 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:55:58.016024 kubelet[2362]: E0317 17:55:58.015973 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:55:59.016367 kubelet[2362]: E0317 17:55:59.016264 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:00.017519 kubelet[2362]: E0317 17:56:00.017468 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:01.018203 kubelet[2362]: E0317 17:56:01.018144 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:01.983874 kubelet[2362]: E0317 17:56:01.983699 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:02.018343 kubelet[2362]: E0317 17:56:02.018298 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:03.019154 kubelet[2362]: E0317 17:56:03.018973 2362 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:04.020194 kubelet[2362]: E0317 17:56:04.020094 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:05.020360 kubelet[2362]: E0317 17:56:05.020295 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:06.021213 kubelet[2362]: E0317 17:56:06.021154 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:07.022417 kubelet[2362]: E0317 17:56:07.022180 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:08.023722 kubelet[2362]: E0317 17:56:08.023598 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:09.024047 kubelet[2362]: E0317 17:56:09.024003 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:10.024468 kubelet[2362]: E0317 17:56:10.024423 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:10.386957 kubelet[2362]: I0317 17:56:10.386805 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zq588" podStartSLOduration=38.67699087 podStartE2EDuration="48.386789355s" podCreationTimestamp="2025-03-17 17:55:22 +0000 UTC" firstStartedPulling="2025-03-17 17:55:46.733372233 +0000 UTC m=+25.700641721" lastFinishedPulling="2025-03-17 17:55:56.443170718 +0000 UTC m=+35.410440206" observedRunningTime="2025-03-17 17:55:57.631629867 +0000 UTC m=+36.598899364" watchObservedRunningTime="2025-03-17 17:56:10.386789355 +0000 UTC m=+49.354058852" Mar 17 
17:56:10.401237 systemd[1]: Created slice kubepods-besteffort-pod83e8333a_e096_40b7_9ce2_4181fefe48e6.slice - libcontainer container kubepods-besteffort-pod83e8333a_e096_40b7_9ce2_4181fefe48e6.slice. Mar 17 17:56:10.572433 kubelet[2362]: I0317 17:56:10.572381 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/83e8333a-e096-40b7-9ce2-4181fefe48e6-data\") pod \"nfs-server-provisioner-0\" (UID: \"83e8333a-e096-40b7-9ce2-4181fefe48e6\") " pod="default/nfs-server-provisioner-0" Mar 17 17:56:10.572583 kubelet[2362]: I0317 17:56:10.572460 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxq2b\" (UniqueName: \"kubernetes.io/projected/83e8333a-e096-40b7-9ce2-4181fefe48e6-kube-api-access-cxq2b\") pod \"nfs-server-provisioner-0\" (UID: \"83e8333a-e096-40b7-9ce2-4181fefe48e6\") " pod="default/nfs-server-provisioner-0" Mar 17 17:56:11.006644 containerd[1904]: time="2025-03-17T17:56:11.006251238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:83e8333a-e096-40b7-9ce2-4181fefe48e6,Namespace:default,Attempt:0,}" Mar 17 17:56:11.025531 kubelet[2362]: E0317 17:56:11.025481 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:11.467690 systemd-networkd[1743]: cali60e51b789ff: Link UP Mar 17 17:56:11.469702 systemd-networkd[1743]: cali60e51b789ff: Gained carrier Mar 17 17:56:11.480399 (udev-worker)[4113]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.105 [INFO][4096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.100-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 83e8333a-e096-40b7-9ce2-4181fefe48e6 1151 0 2025-03-17 17:56:10 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.26.100 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.105 [INFO][4096] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.394 [INFO][4106] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" 
HandleID="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Workload="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.410 [INFO][4106] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" HandleID="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Workload="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334370), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.100", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:56:11.394698117 +0000 UTC"}, Hostname:"172.31.26.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.410 [INFO][4106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.410 [INFO][4106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.410 [INFO][4106] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.100' Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.413 [INFO][4106] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.418 [INFO][4106] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.426 [INFO][4106] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.431 [INFO][4106] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.435 [INFO][4106] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.435 [INFO][4106] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.437 [INFO][4106] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0 Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.443 [INFO][4106] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.458 [INFO][4106] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.131/26] block=192.168.70.128/26 
handle="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.458 [INFO][4106] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.131/26] handle="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" host="172.31.26.100" Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.458 [INFO][4106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:56:11.501244 containerd[1904]: 2025-03-17 17:56:11.458 [INFO][4106] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.131/26] IPv6=[] ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" HandleID="k8s-pod-network.ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Workload="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.502460 containerd[1904]: 2025-03-17 17:56:11.461 [INFO][4096] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"83e8333a-e096-40b7-9ce2-4181fefe48e6", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:56:11.502460 containerd[1904]: 2025-03-17 17:56:11.461 [INFO][4096] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.131/32] ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.502460 containerd[1904]: 2025-03-17 17:56:11.461 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.502460 containerd[1904]: 2025-03-17 17:56:11.469 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.503008 containerd[1904]: 2025-03-17 17:56:11.471 [INFO][4096] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"83e8333a-e096-40b7-9ce2-4181fefe48e6", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ba:4e:f6:62:53:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:56:11.503008 containerd[1904]: 2025-03-17 17:56:11.490 [INFO][4096] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.100-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:56:11.554663 containerd[1904]: time="2025-03-17T17:56:11.554171804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:11.554663 containerd[1904]: time="2025-03-17T17:56:11.554299106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:11.554663 containerd[1904]: time="2025-03-17T17:56:11.554321415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:11.559298 containerd[1904]: time="2025-03-17T17:56:11.555045864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:11.599952 systemd[1]: run-containerd-runc-k8s.io-ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0-runc.pr4fnx.mount: Deactivated successfully. Mar 17 17:56:11.613133 systemd[1]: Started cri-containerd-ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0.scope - libcontainer container ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0. 
Mar 17 17:56:11.703792 containerd[1904]: time="2025-03-17T17:56:11.703748353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:83e8333a-e096-40b7-9ce2-4181fefe48e6,Namespace:default,Attempt:0,} returns sandbox id \"ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0\"" Mar 17 17:56:11.706849 containerd[1904]: time="2025-03-17T17:56:11.706790426Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:56:12.026297 kubelet[2362]: E0317 17:56:12.026232 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:12.904349 systemd-networkd[1743]: cali60e51b789ff: Gained IPv6LL Mar 17 17:56:13.028302 kubelet[2362]: E0317 17:56:13.026379 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:14.028946 kubelet[2362]: E0317 17:56:14.028848 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:15.029516 kubelet[2362]: E0317 17:56:15.029475 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:15.254787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937616557.mount: Deactivated successfully. 
Mar 17 17:56:15.632774 ntpd[1877]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Mar 17 17:56:15.633170 ntpd[1877]: 17 Mar 17:56:15 ntpd[1877]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Mar 17 17:56:16.029664 kubelet[2362]: E0317 17:56:16.029626 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:17.030687 kubelet[2362]: E0317 17:56:17.030646 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:17.784909 containerd[1904]: time="2025-03-17T17:56:17.784855840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:17.788167 containerd[1904]: time="2025-03-17T17:56:17.788103559Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Mar 17 17:56:17.793673 containerd[1904]: time="2025-03-17T17:56:17.793621496Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:17.804762 containerd[1904]: time="2025-03-17T17:56:17.800898341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:17.814046 containerd[1904]: time="2025-03-17T17:56:17.813886213Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.107048839s" Mar 17 17:56:17.814046 containerd[1904]: time="2025-03-17T17:56:17.813970155Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 17:56:17.842519 containerd[1904]: time="2025-03-17T17:56:17.842471282Z" level=info msg="CreateContainer within sandbox \"ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:56:17.865511 containerd[1904]: time="2025-03-17T17:56:17.865463419Z" level=info msg="CreateContainer within sandbox \"ef287845290f0ad9f284ebab47b56667f6fc0f79ea3dc9f04ef05d651a9bd2d0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"669ea52744bc8d87fa4474980eac57de90b2920d33eaa20fa068f1a9df9eecb5\"" Mar 17 17:56:17.867410 containerd[1904]: time="2025-03-17T17:56:17.866086753Z" level=info msg="StartContainer for \"669ea52744bc8d87fa4474980eac57de90b2920d33eaa20fa068f1a9df9eecb5\"" Mar 17 17:56:17.953565 systemd[1]: Started cri-containerd-669ea52744bc8d87fa4474980eac57de90b2920d33eaa20fa068f1a9df9eecb5.scope - libcontainer container 669ea52744bc8d87fa4474980eac57de90b2920d33eaa20fa068f1a9df9eecb5. 
Mar 17 17:56:17.988110 containerd[1904]: time="2025-03-17T17:56:17.988041445Z" level=info msg="StartContainer for \"669ea52744bc8d87fa4474980eac57de90b2920d33eaa20fa068f1a9df9eecb5\" returns successfully" Mar 17 17:56:18.031513 kubelet[2362]: E0317 17:56:18.031446 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:18.801740 kubelet[2362]: I0317 17:56:18.801671 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.682531689 podStartE2EDuration="8.793759611s" podCreationTimestamp="2025-03-17 17:56:10 +0000 UTC" firstStartedPulling="2025-03-17 17:56:11.706227637 +0000 UTC m=+50.673497123" lastFinishedPulling="2025-03-17 17:56:17.817455559 +0000 UTC m=+56.784725045" observedRunningTime="2025-03-17 17:56:18.780867216 +0000 UTC m=+57.748136715" watchObservedRunningTime="2025-03-17 17:56:18.793759611 +0000 UTC m=+57.761029106" Mar 17 17:56:19.032805 kubelet[2362]: E0317 17:56:19.032730 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:19.380462 kubelet[2362]: I0317 17:56:19.378948 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/863fb30f-75a2-4408-9032-f965fa34b0e6-tigera-ca-bundle\") pod \"calico-typha-777f69785d-t7pd9\" (UID: \"863fb30f-75a2-4408-9032-f965fa34b0e6\") " pod="calico-system/calico-typha-777f69785d-t7pd9" Mar 17 17:56:19.380462 kubelet[2362]: I0317 17:56:19.379044 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lcn5\" (UniqueName: \"kubernetes.io/projected/863fb30f-75a2-4408-9032-f965fa34b0e6-kube-api-access-5lcn5\") pod \"calico-typha-777f69785d-t7pd9\" (UID: \"863fb30f-75a2-4408-9032-f965fa34b0e6\") " 
pod="calico-system/calico-typha-777f69785d-t7pd9" Mar 17 17:56:19.380462 kubelet[2362]: I0317 17:56:19.379130 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/863fb30f-75a2-4408-9032-f965fa34b0e6-typha-certs\") pod \"calico-typha-777f69785d-t7pd9\" (UID: \"863fb30f-75a2-4408-9032-f965fa34b0e6\") " pod="calico-system/calico-typha-777f69785d-t7pd9" Mar 17 17:56:19.392952 systemd[1]: Created slice kubepods-besteffort-pod863fb30f_75a2_4408_9032_f965fa34b0e6.slice - libcontainer container kubepods-besteffort-pod863fb30f_75a2_4408_9032_f965fa34b0e6.slice. Mar 17 17:56:19.732404 systemd[1]: run-containerd-runc-k8s.io-5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954-runc.497ChX.mount: Deactivated successfully. Mar 17 17:56:19.746184 containerd[1904]: time="2025-03-17T17:56:19.745865609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777f69785d-t7pd9,Uid:863fb30f-75a2-4408-9032-f965fa34b0e6,Namespace:calico-system,Attempt:0,}" Mar 17 17:56:19.852000 containerd[1904]: time="2025-03-17T17:56:19.851432985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:19.852000 containerd[1904]: time="2025-03-17T17:56:19.851623850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:19.852000 containerd[1904]: time="2025-03-17T17:56:19.851649178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:19.852000 containerd[1904]: time="2025-03-17T17:56:19.851763260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:19.903146 systemd[1]: Started cri-containerd-53c560fc6dff6a78fc7a2580e0dd82043938b24fcfa9a43ae91d85ca1727594b.scope - libcontainer container 53c560fc6dff6a78fc7a2580e0dd82043938b24fcfa9a43ae91d85ca1727594b. Mar 17 17:56:19.970136 containerd[1904]: time="2025-03-17T17:56:19.970078255Z" level=info msg="StopContainer for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" with timeout 5 (s)" Mar 17 17:56:19.983201 containerd[1904]: time="2025-03-17T17:56:19.983050273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777f69785d-t7pd9,Uid:863fb30f-75a2-4408-9032-f965fa34b0e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"53c560fc6dff6a78fc7a2580e0dd82043938b24fcfa9a43ae91d85ca1727594b\"" Mar 17 17:56:19.984526 containerd[1904]: time="2025-03-17T17:56:19.984480543Z" level=info msg="Stop container \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" with signal terminated" Mar 17 17:56:19.985696 containerd[1904]: time="2025-03-17T17:56:19.985533483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 17 17:56:20.010775 systemd[1]: cri-containerd-5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954.scope: Deactivated successfully. Mar 17 17:56:20.011413 systemd[1]: cri-containerd-5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954.scope: Consumed 2.290s CPU time, 184.1M memory peak, 2M read from disk, 632K written to disk. 
Mar 17 17:56:20.033612 kubelet[2362]: E0317 17:56:20.033570 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:20.081862 containerd[1904]: time="2025-03-17T17:56:20.057032314Z" level=info msg="shim disconnected" id=5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954 namespace=k8s.io Mar 17 17:56:20.081862 containerd[1904]: time="2025-03-17T17:56:20.081862452Z" level=warning msg="cleaning up after shim disconnected" id=5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954 namespace=k8s.io Mar 17 17:56:20.090817 containerd[1904]: time="2025-03-17T17:56:20.081882336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:20.333717 containerd[1904]: time="2025-03-17T17:56:20.333516134Z" level=info msg="StopContainer for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" returns successfully" Mar 17 17:56:20.335772 containerd[1904]: time="2025-03-17T17:56:20.335731000Z" level=info msg="StopPodSandbox for \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\"" Mar 17 17:56:20.345253 containerd[1904]: time="2025-03-17T17:56:20.345052644Z" level=info msg="Container to stop \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:20.345253 containerd[1904]: time="2025-03-17T17:56:20.345244170Z" level=info msg="Container to stop \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:20.346151 containerd[1904]: time="2025-03-17T17:56:20.345283487Z" level=info msg="Container to stop \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:20.356774 systemd[1]: 
cri-containerd-ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc.scope: Deactivated successfully. Mar 17 17:56:20.407058 containerd[1904]: time="2025-03-17T17:56:20.406815529Z" level=info msg="shim disconnected" id=ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc namespace=k8s.io Mar 17 17:56:20.407058 containerd[1904]: time="2025-03-17T17:56:20.406868189Z" level=warning msg="cleaning up after shim disconnected" id=ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc namespace=k8s.io Mar 17 17:56:20.407058 containerd[1904]: time="2025-03-17T17:56:20.406879405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:20.459921 containerd[1904]: time="2025-03-17T17:56:20.459783931Z" level=info msg="TearDown network for sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" successfully" Mar 17 17:56:20.460586 containerd[1904]: time="2025-03-17T17:56:20.460363526Z" level=info msg="StopPodSandbox for \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" returns successfully" Mar 17 17:56:20.508738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954-rootfs.mount: Deactivated successfully. Mar 17 17:56:20.509364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc-rootfs.mount: Deactivated successfully. Mar 17 17:56:20.509505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc-shm.mount: Deactivated successfully. 
Mar 17 17:56:20.535353 kubelet[2362]: I0317 17:56:20.535319 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-bin-dir\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.536484 kubelet[2362]: I0317 17:56:20.535822 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-run-calico\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.536484 kubelet[2362]: I0317 17:56:20.535878 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-flexvol-driver-host\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.536484 kubelet[2362]: I0317 17:56:20.535900 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-lib-modules\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.536484 kubelet[2362]: I0317 17:56:20.535921 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-policysync\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.536484 kubelet[2362]: I0317 17:56:20.536021 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73374b66-2ee4-47c9-8aae-384bef6da462-node-certs\") pod 
\"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.536484 kubelet[2362]: I0317 17:56:20.536046 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lg85\" (UniqueName: \"kubernetes.io/projected/73374b66-2ee4-47c9-8aae-384bef6da462-kube-api-access-8lg85\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.537912 kubelet[2362]: I0317 17:56:20.536061 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-log-dir\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.537912 kubelet[2362]: I0317 17:56:20.536077 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-lib-calico\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.537912 kubelet[2362]: I0317 17:56:20.536171 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-xtables-lock\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.537912 kubelet[2362]: I0317 17:56:20.536192 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-net-dir\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.537912 kubelet[2362]: I0317 17:56:20.536211 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73374b66-2ee4-47c9-8aae-384bef6da462-tigera-ca-bundle\") pod \"73374b66-2ee4-47c9-8aae-384bef6da462\" (UID: \"73374b66-2ee4-47c9-8aae-384bef6da462\") " Mar 17 17:56:20.542518 kubelet[2362]: I0317 17:56:20.538481 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.550238 kubelet[2362]: I0317 17:56:20.550001 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.550238 kubelet[2362]: I0317 17:56:20.550084 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.550238 kubelet[2362]: I0317 17:56:20.550114 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.550910 kubelet[2362]: I0317 17:56:20.550155 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-policysync" (OuterVolumeSpecName: "policysync") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.550910 kubelet[2362]: I0317 17:56:20.550662 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.553521 kubelet[2362]: I0317 17:56:20.551912 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73374b66-2ee4-47c9-8aae-384bef6da462-node-certs" (OuterVolumeSpecName: "node-certs") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:56:20.553521 kubelet[2362]: I0317 17:56:20.551974 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.553521 kubelet[2362]: I0317 17:56:20.552004 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.553521 kubelet[2362]: I0317 17:56:20.552029 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:20.559533 systemd[1]: var-lib-kubelet-pods-73374b66\x2d2ee4\x2d47c9\x2d8aae\x2d384bef6da462-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Mar 17 17:56:20.563337 kubelet[2362]: I0317 17:56:20.563111 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73374b66-2ee4-47c9-8aae-384bef6da462-kube-api-access-8lg85" (OuterVolumeSpecName: "kube-api-access-8lg85") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "kube-api-access-8lg85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:20.568903 systemd[1]: var-lib-kubelet-pods-73374b66\x2d2ee4\x2d47c9\x2d8aae\x2d384bef6da462-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8lg85.mount: Deactivated successfully. Mar 17 17:56:20.583719 systemd[1]: var-lib-kubelet-pods-73374b66\x2d2ee4\x2d47c9\x2d8aae\x2d384bef6da462-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
Mar 17 17:56:20.586654 kubelet[2362]: I0317 17:56:20.584078 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73374b66-2ee4-47c9-8aae-384bef6da462-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "73374b66-2ee4-47c9-8aae-384bef6da462" (UID: "73374b66-2ee4-47c9-8aae-384bef6da462"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:20.643393 kubelet[2362]: I0317 17:56:20.643349 2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8lg85\" (UniqueName: \"kubernetes.io/projected/73374b66-2ee4-47c9-8aae-384bef6da462-kube-api-access-8lg85\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643592 2362 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-log-dir\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643610 2362 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-policysync\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643623 2362 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73374b66-2ee4-47c9-8aae-384bef6da462-node-certs\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643635 2362 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73374b66-2ee4-47c9-8aae-384bef6da462-tigera-ca-bundle\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: E0317 17:56:20.643635 2362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="73374b66-2ee4-47c9-8aae-384bef6da462" 
containerName="flexvol-driver" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643650 2362 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-lib-calico\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643662 2362 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-xtables-lock\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.644076 kubelet[2362]: E0317 17:56:20.643665 2362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="73374b66-2ee4-47c9-8aae-384bef6da462" containerName="install-cni" Mar 17 17:56:20.644076 kubelet[2362]: I0317 17:56:20.643675 2362 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-net-dir\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.646010 kubelet[2362]: I0317 17:56:20.643689 2362 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-cni-bin-dir\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.646010 kubelet[2362]: I0317 17:56:20.644002 2362 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-var-run-calico\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.646010 kubelet[2362]: E0317 17:56:20.643677 2362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="73374b66-2ee4-47c9-8aae-384bef6da462" containerName="calico-node" Mar 17 17:56:20.646010 kubelet[2362]: I0317 17:56:20.644031 2362 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-lib-modules\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.646010 kubelet[2362]: I0317 17:56:20.644045 2362 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73374b66-2ee4-47c9-8aae-384bef6da462-flexvol-driver-host\") on node \"172.31.26.100\" DevicePath \"\"" Mar 17 17:56:20.649507 kubelet[2362]: I0317 17:56:20.647960 2362 memory_manager.go:354] "RemoveStaleState removing state" podUID="73374b66-2ee4-47c9-8aae-384bef6da462" containerName="calico-node" Mar 17 17:56:20.664823 systemd[1]: Created slice kubepods-besteffort-pod7c6e4467_ce2e_4997_b4da_5e7d786827e5.slice - libcontainer container kubepods-besteffort-pod7c6e4467_ce2e_4997_b4da_5e7d786827e5.slice. Mar 17 17:56:20.691629 kubelet[2362]: I0317 17:56:20.691595 2362 scope.go:117] "RemoveContainer" containerID="5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954" Mar 17 17:56:20.695406 containerd[1904]: time="2025-03-17T17:56:20.694999523Z" level=info msg="RemoveContainer for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\"" Mar 17 17:56:20.696108 systemd[1]: Removed slice kubepods-besteffort-pod73374b66_2ee4_47c9_8aae_384bef6da462.slice - libcontainer container kubepods-besteffort-pod73374b66_2ee4_47c9_8aae_384bef6da462.slice. Mar 17 17:56:20.696258 systemd[1]: kubepods-besteffort-pod73374b66_2ee4_47c9_8aae_384bef6da462.slice: Consumed 2.841s CPU time, 264.7M memory peak, 2M read from disk, 161M written to disk. 
Mar 17 17:56:20.706850 containerd[1904]: time="2025-03-17T17:56:20.706801452Z" level=info msg="RemoveContainer for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" returns successfully" Mar 17 17:56:20.709786 kubelet[2362]: I0317 17:56:20.709741 2362 scope.go:117] "RemoveContainer" containerID="5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79" Mar 17 17:56:20.712402 containerd[1904]: time="2025-03-17T17:56:20.712228596Z" level=info msg="RemoveContainer for \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\"" Mar 17 17:56:20.715872 containerd[1904]: time="2025-03-17T17:56:20.715825083Z" level=info msg="RemoveContainer for \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\" returns successfully" Mar 17 17:56:20.716205 kubelet[2362]: I0317 17:56:20.716174 2362 scope.go:117] "RemoveContainer" containerID="1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53" Mar 17 17:56:20.724461 containerd[1904]: time="2025-03-17T17:56:20.724421227Z" level=info msg="RemoveContainer for \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\"" Mar 17 17:56:20.728186 containerd[1904]: time="2025-03-17T17:56:20.728135588Z" level=info msg="RemoveContainer for \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\" returns successfully" Mar 17 17:56:20.728469 kubelet[2362]: I0317 17:56:20.728430 2362 scope.go:117] "RemoveContainer" containerID="5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954" Mar 17 17:56:20.728719 containerd[1904]: time="2025-03-17T17:56:20.728686621Z" level=error msg="ContainerStatus for \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\": not found" Mar 17 17:56:20.733531 kubelet[2362]: E0317 17:56:20.733475 2362 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\": not found" containerID="5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954" Mar 17 17:56:20.741039 kubelet[2362]: I0317 17:56:20.733544 2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954"} err="failed to get container status \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b2089dd9078b6e1d60db3a352281c39ee25722e4e627bc8303426af632c0954\": not found" Mar 17 17:56:20.741197 kubelet[2362]: I0317 17:56:20.741048 2362 scope.go:117] "RemoveContainer" containerID="5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79" Mar 17 17:56:20.741477 containerd[1904]: time="2025-03-17T17:56:20.741429913Z" level=error msg="ContainerStatus for \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\": not found" Mar 17 17:56:20.741715 kubelet[2362]: E0317 17:56:20.741690 2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\": not found" containerID="5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79" Mar 17 17:56:20.741788 kubelet[2362]: I0317 17:56:20.741724 2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79"} err="failed to get container status \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"5274a9d48c7e8a17d4eafbddd3cde1050fd1ac056aea007a97a44ab40e9d5d79\": not found" Mar 17 17:56:20.741788 kubelet[2362]: I0317 17:56:20.741748 2362 scope.go:117] "RemoveContainer" containerID="1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53" Mar 17 17:56:20.742010 containerd[1904]: time="2025-03-17T17:56:20.741970042Z" level=error msg="ContainerStatus for \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\": not found" Mar 17 17:56:20.742132 kubelet[2362]: E0317 17:56:20.742107 2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\": not found" containerID="1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53" Mar 17 17:56:20.742201 kubelet[2362]: I0317 17:56:20.742137 2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53"} err="failed to get container status \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\": rpc error: code = NotFound desc = an error occurred when try to find container \"1873fbea95adf1f791e936e0eaa1bd0ac6ded8a8fb68abe1eb19b8e6b9aa0b53\": not found" Mar 17 17:56:20.744475 kubelet[2362]: I0317 17:56:20.744445 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c6e4467-ce2e-4997-b4da-5e7d786827e5-tigera-ca-bundle\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744582 kubelet[2362]: I0317 17:56:20.744485 
2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-var-run-calico\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744582 kubelet[2362]: I0317 17:56:20.744513 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-xtables-lock\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744582 kubelet[2362]: I0317 17:56:20.744538 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-cni-bin-dir\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744582 kubelet[2362]: I0317 17:56:20.744566 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-cni-log-dir\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744758 kubelet[2362]: I0317 17:56:20.744592 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kb62\" (UniqueName: \"kubernetes.io/projected/7c6e4467-ce2e-4997-b4da-5e7d786827e5-kube-api-access-6kb62\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744758 kubelet[2362]: I0317 17:56:20.744619 2362 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-var-lib-calico\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744758 kubelet[2362]: I0317 17:56:20.744644 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-lib-modules\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744758 kubelet[2362]: I0317 17:56:20.744669 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-policysync\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744758 kubelet[2362]: I0317 17:56:20.744695 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c6e4467-ce2e-4997-b4da-5e7d786827e5-node-certs\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744934 kubelet[2362]: I0317 17:56:20.744719 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-cni-net-dir\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.744934 kubelet[2362]: I0317 17:56:20.744744 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/7c6e4467-ce2e-4997-b4da-5e7d786827e5-flexvol-driver-host\") pod \"calico-node-2gs88\" (UID: \"7c6e4467-ce2e-4997-b4da-5e7d786827e5\") " pod="calico-system/calico-node-2gs88" Mar 17 17:56:20.981264 containerd[1904]: time="2025-03-17T17:56:20.981218386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gs88,Uid:7c6e4467-ce2e-4997-b4da-5e7d786827e5,Namespace:calico-system,Attempt:0,}" Mar 17 17:56:21.035301 kubelet[2362]: E0317 17:56:21.035218 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:21.035832 containerd[1904]: time="2025-03-17T17:56:21.033102348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:21.035832 containerd[1904]: time="2025-03-17T17:56:21.033406872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:21.035832 containerd[1904]: time="2025-03-17T17:56:21.034856436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:21.036822 containerd[1904]: time="2025-03-17T17:56:21.036733443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:21.074514 systemd[1]: Started cri-containerd-32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914.scope - libcontainer container 32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914. 
Mar 17 17:56:21.125703 containerd[1904]: time="2025-03-17T17:56:21.125655982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gs88,Uid:7c6e4467-ce2e-4997-b4da-5e7d786827e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\"" Mar 17 17:56:21.129537 containerd[1904]: time="2025-03-17T17:56:21.129389802Z" level=info msg="CreateContainer within sandbox \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:56:21.148086 containerd[1904]: time="2025-03-17T17:56:21.148040030Z" level=info msg="CreateContainer within sandbox \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade\"" Mar 17 17:56:21.148772 containerd[1904]: time="2025-03-17T17:56:21.148744122Z" level=info msg="StartContainer for \"924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade\"" Mar 17 17:56:21.183597 systemd[1]: Started cri-containerd-924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade.scope - libcontainer container 924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade. Mar 17 17:56:21.222637 containerd[1904]: time="2025-03-17T17:56:21.222519665Z" level=info msg="StartContainer for \"924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade\" returns successfully" Mar 17 17:56:21.334946 systemd[1]: cri-containerd-924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade.scope: Deactivated successfully. Mar 17 17:56:21.335427 systemd[1]: cri-containerd-924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade.scope: Consumed 35ms CPU time, 19.3M memory peak, 11.5M read from disk, 6.3M written to disk. 
Mar 17 17:56:21.421116 containerd[1904]: time="2025-03-17T17:56:21.420850253Z" level=info msg="shim disconnected" id=924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade namespace=k8s.io Mar 17 17:56:21.421116 containerd[1904]: time="2025-03-17T17:56:21.420926561Z" level=warning msg="cleaning up after shim disconnected" id=924918eefaaa82bb81cf1aaf7e00b4c291e5bf4149d20a2c99baa7a7d8267ade namespace=k8s.io Mar 17 17:56:21.421116 containerd[1904]: time="2025-03-17T17:56:21.420941010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:21.704256 containerd[1904]: time="2025-03-17T17:56:21.704097037Z" level=info msg="CreateContainer within sandbox \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:56:21.761922 containerd[1904]: time="2025-03-17T17:56:21.761858351Z" level=info msg="CreateContainer within sandbox \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce\"" Mar 17 17:56:21.769536 containerd[1904]: time="2025-03-17T17:56:21.769488698Z" level=info msg="StartContainer for \"e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce\"" Mar 17 17:56:21.837846 systemd[1]: run-containerd-runc-k8s.io-e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce-runc.AoxR0g.mount: Deactivated successfully. Mar 17 17:56:21.848534 systemd[1]: Started cri-containerd-e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce.scope - libcontainer container e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce. 
Mar 17 17:56:21.943438 containerd[1904]: time="2025-03-17T17:56:21.943250895Z" level=info msg="StartContainer for \"e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce\" returns successfully" Mar 17 17:56:21.988325 kubelet[2362]: E0317 17:56:21.983445 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:22.032047 containerd[1904]: time="2025-03-17T17:56:22.032004722Z" level=info msg="StopPodSandbox for \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\"" Mar 17 17:56:22.032627 containerd[1904]: time="2025-03-17T17:56:22.032113339Z" level=info msg="TearDown network for sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" successfully" Mar 17 17:56:22.032627 containerd[1904]: time="2025-03-17T17:56:22.032180921Z" level=info msg="StopPodSandbox for \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" returns successfully" Mar 17 17:56:22.035982 kubelet[2362]: E0317 17:56:22.035927 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:22.039139 containerd[1904]: time="2025-03-17T17:56:22.039087726Z" level=info msg="RemovePodSandbox for \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\"" Mar 17 17:56:22.039484 containerd[1904]: time="2025-03-17T17:56:22.039458461Z" level=info msg="Forcibly stopping sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\"" Mar 17 17:56:22.039788 containerd[1904]: time="2025-03-17T17:56:22.039645811Z" level=info msg="TearDown network for sandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" successfully" Mar 17 17:56:22.049636 containerd[1904]: time="2025-03-17T17:56:22.049586760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\": an error occurred when 
try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:56:22.050087 containerd[1904]: time="2025-03-17T17:56:22.049660060Z" level=info msg="RemovePodSandbox \"ba72f6ffba1bf7feff55180a0460c1d4e403a670c2a454b495d02749d0fe2cbc\" returns successfully" Mar 17 17:56:22.050841 containerd[1904]: time="2025-03-17T17:56:22.050425049Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" Mar 17 17:56:22.050841 containerd[1904]: time="2025-03-17T17:56:22.050551046Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully" Mar 17 17:56:22.050841 containerd[1904]: time="2025-03-17T17:56:22.050567044Z" level=info msg="StopPodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully" Mar 17 17:56:22.051185 containerd[1904]: time="2025-03-17T17:56:22.051161541Z" level=info msg="RemovePodSandbox for \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" Mar 17 17:56:22.051371 containerd[1904]: time="2025-03-17T17:56:22.051352666Z" level=info msg="Forcibly stopping sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\"" Mar 17 17:56:22.051587 containerd[1904]: time="2025-03-17T17:56:22.051537943Z" level=info msg="TearDown network for sandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" successfully" Mar 17 17:56:22.059307 containerd[1904]: time="2025-03-17T17:56:22.059247739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:56:22.059516 containerd[1904]: time="2025-03-17T17:56:22.059496003Z" level=info msg="RemovePodSandbox \"d165655ce0b40c8d33ab1b086e897473b1dee29df96aef351cfa7ec84d48b00c\" returns successfully" Mar 17 17:56:22.061167 containerd[1904]: time="2025-03-17T17:56:22.060442133Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" Mar 17 17:56:22.061167 containerd[1904]: time="2025-03-17T17:56:22.060559841Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully" Mar 17 17:56:22.061167 containerd[1904]: time="2025-03-17T17:56:22.060575206Z" level=info msg="StopPodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully" Mar 17 17:56:22.062360 containerd[1904]: time="2025-03-17T17:56:22.062333692Z" level=info msg="RemovePodSandbox for \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" Mar 17 17:56:22.063252 containerd[1904]: time="2025-03-17T17:56:22.063227652Z" level=info msg="Forcibly stopping sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\"" Mar 17 17:56:22.063717 containerd[1904]: time="2025-03-17T17:56:22.063570312Z" level=info msg="TearDown network for sandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" successfully" Mar 17 17:56:22.073631 containerd[1904]: time="2025-03-17T17:56:22.073585825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:56:22.073875 containerd[1904]: time="2025-03-17T17:56:22.073645183Z" level=info msg="RemovePodSandbox \"49060f60af073ec7b83a2b018274c790300ef555859b2b6526084e37db9333d9\" returns successfully" Mar 17 17:56:22.074432 containerd[1904]: time="2025-03-17T17:56:22.074400069Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" Mar 17 17:56:22.074753 containerd[1904]: time="2025-03-17T17:56:22.074582007Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully" Mar 17 17:56:22.074753 containerd[1904]: time="2025-03-17T17:56:22.074599317Z" level=info msg="StopPodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully" Mar 17 17:56:22.075449 containerd[1904]: time="2025-03-17T17:56:22.075407642Z" level=info msg="RemovePodSandbox for \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" Mar 17 17:56:22.075449 containerd[1904]: time="2025-03-17T17:56:22.075438111Z" level=info msg="Forcibly stopping sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\"" Mar 17 17:56:22.075783 containerd[1904]: time="2025-03-17T17:56:22.075680757Z" level=info msg="TearDown network for sandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" successfully" Mar 17 17:56:22.089490 containerd[1904]: time="2025-03-17T17:56:22.089442851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:56:22.089666 containerd[1904]: time="2025-03-17T17:56:22.089505180Z" level=info msg="RemovePodSandbox \"a956a9cf75e62971dec3be6ba4c36c9ae323cddff2f78c934850cade99228e60\" returns successfully" Mar 17 17:56:22.090363 containerd[1904]: time="2025-03-17T17:56:22.090320975Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" Mar 17 17:56:22.090486 containerd[1904]: time="2025-03-17T17:56:22.090460645Z" level=info msg="TearDown network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully" Mar 17 17:56:22.090563 containerd[1904]: time="2025-03-17T17:56:22.090483338Z" level=info msg="StopPodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully" Mar 17 17:56:22.091833 containerd[1904]: time="2025-03-17T17:56:22.091786973Z" level=info msg="RemovePodSandbox for \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" Mar 17 17:56:22.091937 containerd[1904]: time="2025-03-17T17:56:22.091837289Z" level=info msg="Forcibly stopping sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\"" Mar 17 17:56:22.092047 containerd[1904]: time="2025-03-17T17:56:22.091931487Z" level=info msg="TearDown network for sandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" successfully" Mar 17 17:56:22.101532 containerd[1904]: time="2025-03-17T17:56:22.101413439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:56:22.101532 containerd[1904]: time="2025-03-17T17:56:22.101488117Z" level=info msg="RemovePodSandbox \"574f8b02148045276d6bb4ae68f3d91717c27939a382ee5888b378fa81bcd735\" returns successfully"
Mar 17 17:56:22.102151 containerd[1904]: time="2025-03-17T17:56:22.102113085Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\""
Mar 17 17:56:22.102262 containerd[1904]: time="2025-03-17T17:56:22.102237863Z" level=info msg="TearDown network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" successfully"
Mar 17 17:56:22.102262 containerd[1904]: time="2025-03-17T17:56:22.102253096Z" level=info msg="StopPodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" returns successfully"
Mar 17 17:56:22.104412 containerd[1904]: time="2025-03-17T17:56:22.104384599Z" level=info msg="RemovePodSandbox for \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\""
Mar 17 17:56:22.104524 containerd[1904]: time="2025-03-17T17:56:22.104420331Z" level=info msg="Forcibly stopping sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\""
Mar 17 17:56:22.104574 containerd[1904]: time="2025-03-17T17:56:22.104516692Z" level=info msg="TearDown network for sandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" successfully"
Mar 17 17:56:22.111454 containerd[1904]: time="2025-03-17T17:56:22.111395227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.111599 containerd[1904]: time="2025-03-17T17:56:22.111466593Z" level=info msg="RemovePodSandbox \"2c085d60e153a9d5494f881f9af7697dc10056ea3efb5fe12910d44c6fab6d68\" returns successfully"
Mar 17 17:56:22.113383 containerd[1904]: time="2025-03-17T17:56:22.113324846Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\""
Mar 17 17:56:22.113662 containerd[1904]: time="2025-03-17T17:56:22.113537722Z" level=info msg="TearDown network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" successfully"
Mar 17 17:56:22.113662 containerd[1904]: time="2025-03-17T17:56:22.113557943Z" level=info msg="StopPodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" returns successfully"
Mar 17 17:56:22.114916 containerd[1904]: time="2025-03-17T17:56:22.114886315Z" level=info msg="RemovePodSandbox for \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\""
Mar 17 17:56:22.115025 containerd[1904]: time="2025-03-17T17:56:22.114924707Z" level=info msg="Forcibly stopping sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\""
Mar 17 17:56:22.115081 containerd[1904]: time="2025-03-17T17:56:22.115036086Z" level=info msg="TearDown network for sandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" successfully"
Mar 17 17:56:22.121334 containerd[1904]: time="2025-03-17T17:56:22.121174322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.121334 containerd[1904]: time="2025-03-17T17:56:22.121246049Z" level=info msg="RemovePodSandbox \"04142e3ec754739aec4ab174e47c767b77a9b94354bada57fa4014aca1fe364f\" returns successfully"
Mar 17 17:56:22.122261 containerd[1904]: time="2025-03-17T17:56:22.122228806Z" level=info msg="StopPodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\""
Mar 17 17:56:22.122393 containerd[1904]: time="2025-03-17T17:56:22.122356410Z" level=info msg="TearDown network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" successfully"
Mar 17 17:56:22.122393 containerd[1904]: time="2025-03-17T17:56:22.122372231Z" level=info msg="StopPodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" returns successfully"
Mar 17 17:56:22.123206 containerd[1904]: time="2025-03-17T17:56:22.123088859Z" level=info msg="RemovePodSandbox for \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\""
Mar 17 17:56:22.123206 containerd[1904]: time="2025-03-17T17:56:22.123124006Z" level=info msg="Forcibly stopping sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\""
Mar 17 17:56:22.123392 containerd[1904]: time="2025-03-17T17:56:22.123215647Z" level=info msg="TearDown network for sandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" successfully"
Mar 17 17:56:22.128870 containerd[1904]: time="2025-03-17T17:56:22.128663932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.128870 containerd[1904]: time="2025-03-17T17:56:22.128740224Z" level=info msg="RemovePodSandbox \"febc05d41ce8fa6c8d29a718242cf2106b77464c434e1ed8c6d6debc1e40efdb\" returns successfully"
Mar 17 17:56:22.129580 containerd[1904]: time="2025-03-17T17:56:22.129545215Z" level=info msg="StopPodSandbox for \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\""
Mar 17 17:56:22.129696 containerd[1904]: time="2025-03-17T17:56:22.129664174Z" level=info msg="TearDown network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\" successfully"
Mar 17 17:56:22.129696 containerd[1904]: time="2025-03-17T17:56:22.129679035Z" level=info msg="StopPodSandbox for \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\" returns successfully"
Mar 17 17:56:22.130565 containerd[1904]: time="2025-03-17T17:56:22.130440770Z" level=info msg="RemovePodSandbox for \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\""
Mar 17 17:56:22.130565 containerd[1904]: time="2025-03-17T17:56:22.130472046Z" level=info msg="Forcibly stopping sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\""
Mar 17 17:56:22.130808 containerd[1904]: time="2025-03-17T17:56:22.130560856Z" level=info msg="TearDown network for sandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\" successfully"
Mar 17 17:56:22.136536 containerd[1904]: time="2025-03-17T17:56:22.136333695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.136536 containerd[1904]: time="2025-03-17T17:56:22.136405191Z" level=info msg="RemovePodSandbox \"6ea1dab503561b54eb76e956af304220e0e830c9ee1c732d1f3cd234b9a525ab\" returns successfully"
Mar 17 17:56:22.137164 containerd[1904]: time="2025-03-17T17:56:22.137132606Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\""
Mar 17 17:56:22.137280 containerd[1904]: time="2025-03-17T17:56:22.137250758Z" level=info msg="TearDown network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" successfully"
Mar 17 17:56:22.137709 containerd[1904]: time="2025-03-17T17:56:22.137569732Z" level=info msg="StopPodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" returns successfully"
Mar 17 17:56:22.138168 containerd[1904]: time="2025-03-17T17:56:22.138142886Z" level=info msg="RemovePodSandbox for \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\""
Mar 17 17:56:22.138255 containerd[1904]: time="2025-03-17T17:56:22.138176890Z" level=info msg="Forcibly stopping sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\""
Mar 17 17:56:22.138331 containerd[1904]: time="2025-03-17T17:56:22.138279686Z" level=info msg="TearDown network for sandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" successfully"
Mar 17 17:56:22.144409 containerd[1904]: time="2025-03-17T17:56:22.144295096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.144409 containerd[1904]: time="2025-03-17T17:56:22.144365085Z" level=info msg="RemovePodSandbox \"765d33412ac297027722679f4c9d7f40ef617ee87c4e11f92dfd539397468bdd\" returns successfully"
Mar 17 17:56:22.144880 containerd[1904]: time="2025-03-17T17:56:22.144838435Z" level=info msg="StopPodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\""
Mar 17 17:56:22.144988 containerd[1904]: time="2025-03-17T17:56:22.144956812Z" level=info msg="TearDown network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" successfully"
Mar 17 17:56:22.144988 containerd[1904]: time="2025-03-17T17:56:22.144978140Z" level=info msg="StopPodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" returns successfully"
Mar 17 17:56:22.145936 containerd[1904]: time="2025-03-17T17:56:22.145778534Z" level=info msg="RemovePodSandbox for \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\""
Mar 17 17:56:22.145936 containerd[1904]: time="2025-03-17T17:56:22.145884233Z" level=info msg="Forcibly stopping sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\""
Mar 17 17:56:22.146081 containerd[1904]: time="2025-03-17T17:56:22.146018854Z" level=info msg="TearDown network for sandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" successfully"
Mar 17 17:56:22.152204 containerd[1904]: time="2025-03-17T17:56:22.152012412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.152204 containerd[1904]: time="2025-03-17T17:56:22.152079820Z" level=info msg="RemovePodSandbox \"539aa4206bbc52cd553da2b8d303b5ff931544a96650bba14037003ae374c7c0\" returns successfully"
Mar 17 17:56:22.152567 containerd[1904]: time="2025-03-17T17:56:22.152540010Z" level=info msg="StopPodSandbox for \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\""
Mar 17 17:56:22.152735 containerd[1904]: time="2025-03-17T17:56:22.152659177Z" level=info msg="TearDown network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\" successfully"
Mar 17 17:56:22.152735 containerd[1904]: time="2025-03-17T17:56:22.152677782Z" level=info msg="StopPodSandbox for \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\" returns successfully"
Mar 17 17:56:22.153858 containerd[1904]: time="2025-03-17T17:56:22.153743112Z" level=info msg="RemovePodSandbox for \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\""
Mar 17 17:56:22.153858 containerd[1904]: time="2025-03-17T17:56:22.153776500Z" level=info msg="Forcibly stopping sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\""
Mar 17 17:56:22.154002 containerd[1904]: time="2025-03-17T17:56:22.153893156Z" level=info msg="TearDown network for sandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\" successfully"
Mar 17 17:56:22.160547 containerd[1904]: time="2025-03-17T17:56:22.160072014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:56:22.160547 containerd[1904]: time="2025-03-17T17:56:22.160142643Z" level=info msg="RemovePodSandbox \"cc4f4a292acea3a35843b5cb000ebbaf1f8ea0454fca1c7cfed41a6df9e38bba\" returns successfully"
Mar 17 17:56:22.210434 kubelet[2362]: I0317 17:56:22.210391 2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73374b66-2ee4-47c9-8aae-384bef6da462" path="/var/lib/kubelet/pods/73374b66-2ee4-47c9-8aae-384bef6da462/volumes"
Mar 17 17:56:22.457094 containerd[1904]: time="2025-03-17T17:56:22.457030930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:22.458940 containerd[1904]: time="2025-03-17T17:56:22.458745768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075"
Mar 17 17:56:22.462016 containerd[1904]: time="2025-03-17T17:56:22.460854953Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:22.465039 containerd[1904]: time="2025-03-17T17:56:22.464082436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:22.465039 containerd[1904]: time="2025-03-17T17:56:22.464893609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 2.479321918s"
Mar 17 17:56:22.465039 containerd[1904]: time="2025-03-17T17:56:22.464932308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\""
Mar 17 17:56:22.472954 containerd[1904]: time="2025-03-17T17:56:22.472907456Z" level=info msg="CreateContainer within sandbox \"53c560fc6dff6a78fc7a2580e0dd82043938b24fcfa9a43ae91d85ca1727594b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 17 17:56:22.514640 containerd[1904]: time="2025-03-17T17:56:22.514594763Z" level=info msg="CreateContainer within sandbox \"53c560fc6dff6a78fc7a2580e0dd82043938b24fcfa9a43ae91d85ca1727594b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f37366efed5db0ae4ebeed34aebaa80296d2dce024317a1c895372473a036c72\""
Mar 17 17:56:22.515487 containerd[1904]: time="2025-03-17T17:56:22.515457417Z" level=info msg="StartContainer for \"f37366efed5db0ae4ebeed34aebaa80296d2dce024317a1c895372473a036c72\""
Mar 17 17:56:22.557545 systemd[1]: Started cri-containerd-f37366efed5db0ae4ebeed34aebaa80296d2dce024317a1c895372473a036c72.scope - libcontainer container f37366efed5db0ae4ebeed34aebaa80296d2dce024317a1c895372473a036c72.
Mar 17 17:56:22.626151 containerd[1904]: time="2025-03-17T17:56:22.625563863Z" level=info msg="StartContainer for \"f37366efed5db0ae4ebeed34aebaa80296d2dce024317a1c895372473a036c72\" returns successfully"
Mar 17 17:56:22.861014 kubelet[2362]: I0317 17:56:22.860502 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-777f69785d-t7pd9" podStartSLOduration=1.379520264 podStartE2EDuration="3.860487059s" podCreationTimestamp="2025-03-17 17:56:19 +0000 UTC" firstStartedPulling="2025-03-17 17:56:19.984933097 +0000 UTC m=+58.952202589" lastFinishedPulling="2025-03-17 17:56:22.465899894 +0000 UTC m=+61.433169384" observedRunningTime="2025-03-17 17:56:22.831164732 +0000 UTC m=+61.798434228" watchObservedRunningTime="2025-03-17 17:56:22.860487059 +0000 UTC m=+61.827756555"
Mar 17 17:56:23.036974 kubelet[2362]: E0317 17:56:23.036915 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:23.496331 systemd[1]: run-containerd-runc-k8s.io-f37366efed5db0ae4ebeed34aebaa80296d2dce024317a1c895372473a036c72-runc.GY5kjS.mount: Deactivated successfully.
Mar 17 17:56:24.037583 kubelet[2362]: E0317 17:56:24.037485 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:25.038423 kubelet[2362]: E0317 17:56:25.038363 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:26.039011 kubelet[2362]: E0317 17:56:26.038918 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:27.040236 kubelet[2362]: E0317 17:56:27.040169 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:28.040430 kubelet[2362]: E0317 17:56:28.040370 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:29.040769 kubelet[2362]: E0317 17:56:29.040712 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:30.041328 kubelet[2362]: E0317 17:56:30.041257 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:31.041967 kubelet[2362]: E0317 17:56:31.041911 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:31.971799 systemd[1]: cri-containerd-e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce.scope: Deactivated successfully.
Mar 17 17:56:31.974888 systemd[1]: cri-containerd-e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce.scope: Consumed 982ms CPU time, 116.7M memory peak, 102.2M read from disk.
Mar 17 17:56:32.011380 containerd[1904]: time="2025-03-17T17:56:32.010982682Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config"
Mar 17 17:56:32.041113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce-rootfs.mount: Deactivated successfully.
Mar 17 17:56:32.042940 kubelet[2362]: E0317 17:56:32.042357 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:32.061505 containerd[1904]: time="2025-03-17T17:56:32.061439778Z" level=info msg="shim disconnected" id=e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce namespace=k8s.io
Mar 17 17:56:32.061505 containerd[1904]: time="2025-03-17T17:56:32.061499437Z" level=warning msg="cleaning up after shim disconnected" id=e5b976de08bc8d15c186700f2365a5410562be84ec7d72af97c884acbc7935ce namespace=k8s.io
Mar 17 17:56:32.061865 containerd[1904]: time="2025-03-17T17:56:32.061572465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:32.809000 containerd[1904]: time="2025-03-17T17:56:32.808960017Z" level=info msg="CreateContainer within sandbox \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 17 17:56:32.832253 containerd[1904]: time="2025-03-17T17:56:32.832194099Z" level=info msg="CreateContainer within sandbox \"32935755556b7610949419406a2afdb12dfb38aa9f842406f6cc74ba37d8b914\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"70d6e82e6fc09053dea8492d97ccb7ab45ad88853054270886fb656ade941995\""
Mar 17 17:56:32.832964 containerd[1904]: time="2025-03-17T17:56:32.832931460Z" level=info msg="StartContainer for \"70d6e82e6fc09053dea8492d97ccb7ab45ad88853054270886fb656ade941995\""
Mar 17 17:56:32.870525 systemd[1]: Started cri-containerd-70d6e82e6fc09053dea8492d97ccb7ab45ad88853054270886fb656ade941995.scope - libcontainer container 70d6e82e6fc09053dea8492d97ccb7ab45ad88853054270886fb656ade941995.
Mar 17 17:56:32.915506 containerd[1904]: time="2025-03-17T17:56:32.915460858Z" level=info msg="StartContainer for \"70d6e82e6fc09053dea8492d97ccb7ab45ad88853054270886fb656ade941995\" returns successfully"
Mar 17 17:56:33.043222 kubelet[2362]: E0317 17:56:33.043130 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:33.895916 kubelet[2362]: I0317 17:56:33.894683 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2gs88" podStartSLOduration=13.894661108 podStartE2EDuration="13.894661108s" podCreationTimestamp="2025-03-17 17:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:33.891578244 +0000 UTC m=+72.858847745" watchObservedRunningTime="2025-03-17 17:56:33.894661108 +0000 UTC m=+72.861930605"
Mar 17 17:56:34.044143 kubelet[2362]: E0317 17:56:34.044069 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:35.045035 kubelet[2362]: E0317 17:56:35.044951 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:35.495591 systemd[1]: Created slice kubepods-besteffort-pod09079a80_0eff_4029_a3c1_ec86f2679107.slice - libcontainer container kubepods-besteffort-pod09079a80_0eff_4029_a3c1_ec86f2679107.slice.
Mar 17 17:56:35.563465 kubelet[2362]: I0317 17:56:35.562976 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3f8f76b-36b3-487b-b367-8c1289ec338a\" (UniqueName: \"kubernetes.io/nfs/09079a80-0eff-4029-a3c1-ec86f2679107-pvc-f3f8f76b-36b3-487b-b367-8c1289ec338a\") pod \"test-pod-1\" (UID: \"09079a80-0eff-4029-a3c1-ec86f2679107\") " pod="default/test-pod-1"
Mar 17 17:56:35.563465 kubelet[2362]: I0317 17:56:35.563094 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cnd\" (UniqueName: \"kubernetes.io/projected/09079a80-0eff-4029-a3c1-ec86f2679107-kube-api-access-h4cnd\") pod \"test-pod-1\" (UID: \"09079a80-0eff-4029-a3c1-ec86f2679107\") " pod="default/test-pod-1"
Mar 17 17:56:35.775007 (udev-worker)[4946]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:56:35.777020 (udev-worker)[4947]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:56:35.910302 kernel: FS-Cache: Loaded
Mar 17 17:56:36.048710 kubelet[2362]: E0317 17:56:36.048160 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:36.050929 kernel: RPC: Registered named UNIX socket transport module.
Mar 17 17:56:36.051021 kernel: RPC: Registered udp transport module.
Mar 17 17:56:36.051050 kernel: RPC: Registered tcp transport module.
Mar 17 17:56:36.051078 kernel: RPC: Registered tcp-with-tls transport module.
Mar 17 17:56:36.051104 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Mar 17 17:56:36.527482 kernel: NFS: Registering the id_resolver key type
Mar 17 17:56:36.527796 kernel: Key type id_resolver registered
Mar 17 17:56:36.527857 kernel: Key type id_legacy registered
Mar 17 17:56:36.653221 nfsidmap[4989]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Mar 17 17:56:36.656716 nfsidmap[4990]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Mar 17 17:56:36.701728 containerd[1904]: time="2025-03-17T17:56:36.701679106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:09079a80-0eff-4029-a3c1-ec86f2679107,Namespace:default,Attempt:0,}"
Mar 17 17:56:37.049048 kubelet[2362]: E0317 17:56:37.048928 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:56:37.358484 systemd-networkd[1743]: cali5ec59c6bf6e: Link UP
Mar 17 17:56:37.358712 (udev-worker)[4951]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:56:37.361737 systemd-networkd[1743]: cali5ec59c6bf6e: Gained carrier
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.017 [INFO][4992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.100-k8s-test--pod--1-eth0 default 09079a80-0eff-4029-a3c1-ec86f2679107 1433 0 2025-03-17 17:56:11 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.100 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.017 [INFO][4992] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.070 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" HandleID="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Workload="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.209 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" HandleID="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Workload="172.31.26.100-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295420), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.100", "pod":"test-pod-1", "timestamp":"2025-03-17 17:56:37.070616117 +0000 UTC"}, Hostname:"172.31.26.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.210 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.210 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.210 [INFO][5005] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.100'
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.216 [INFO][5005] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.301 [INFO][5005] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.311 [INFO][5005] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.315 [INFO][5005] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.319 [INFO][5005] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.319 [INFO][5005] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.324 [INFO][5005] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.333 [INFO][5005] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.350 [INFO][5005] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.132/26] block=192.168.70.128/26 handle="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.350 [INFO][5005] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.132/26] handle="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" host="172.31.26.100"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.350 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.350 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.132/26] IPv6=[] ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" HandleID="k8s-pod-network.ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Workload="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.400780 containerd[1904]: 2025-03-17 17:56:37.355 [INFO][4992] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"09079a80-0eff-4029-a3c1-ec86f2679107", ResourceVersion:"1433", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 56, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:56:37.402177 containerd[1904]: 2025-03-17 17:56:37.355 [INFO][4992] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.132/32] ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.402177 containerd[1904]: 2025-03-17 17:56:37.355 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.402177 containerd[1904]: 2025-03-17 17:56:37.363 [INFO][4992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.402177 containerd[1904]: 2025-03-17 17:56:37.367 [INFO][4992] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"09079a80-0eff-4029-a3c1-ec86f2679107", ResourceVersion:"1433", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 56, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"16:e4:e9:bf:bc:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:56:37.402177 containerd[1904]: 2025-03-17 17:56:37.390 [INFO][4992] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.100-k8s-test--pod--1-eth0"
Mar 17 17:56:37.607211 containerd[1904]: time="2025-03-17T17:56:37.607088034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:56:37.607211 containerd[1904]: time="2025-03-17T17:56:37.607162017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:56:37.607211 containerd[1904]: time="2025-03-17T17:56:37.607184218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:37.610016 containerd[1904]: time="2025-03-17T17:56:37.609463153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:37.647591 systemd[1]: Started cri-containerd-ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac.scope - libcontainer container ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac.
Mar 17 17:56:37.702707 containerd[1904]: time="2025-03-17T17:56:37.702665282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:09079a80-0eff-4029-a3c1-ec86f2679107,Namespace:default,Attempt:0,} returns sandbox id \"ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac\"" Mar 17 17:56:37.704937 containerd[1904]: time="2025-03-17T17:56:37.704883908Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:56:37.880446 systemd[1]: Created slice kubepods-besteffort-pod542832a2_982d_4ce6_bea6_e93e3ddc1549.slice - libcontainer container kubepods-besteffort-pod542832a2_982d_4ce6_bea6_e93e3ddc1549.slice. Mar 17 17:56:37.996066 kubelet[2362]: I0317 17:56:37.996002 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/542832a2-982d-4ce6-bea6-e93e3ddc1549-tigera-ca-bundle\") pod \"calico-kube-controllers-5595df8959-6xltz\" (UID: \"542832a2-982d-4ce6-bea6-e93e3ddc1549\") " pod="calico-system/calico-kube-controllers-5595df8959-6xltz" Mar 17 17:56:37.996066 kubelet[2362]: I0317 17:56:37.996063 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286hh\" (UniqueName: \"kubernetes.io/projected/542832a2-982d-4ce6-bea6-e93e3ddc1549-kube-api-access-286hh\") pod \"calico-kube-controllers-5595df8959-6xltz\" (UID: \"542832a2-982d-4ce6-bea6-e93e3ddc1549\") " pod="calico-system/calico-kube-controllers-5595df8959-6xltz" Mar 17 17:56:38.050282 kubelet[2362]: E0317 17:56:38.050072 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:38.096314 containerd[1904]: time="2025-03-17T17:56:38.092955338Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:38.096314 containerd[1904]: 
time="2025-03-17T17:56:38.094293161Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:56:38.111794 containerd[1904]: time="2025-03-17T17:56:38.111745508Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 406.820234ms" Mar 17 17:56:38.112028 containerd[1904]: time="2025-03-17T17:56:38.112002849Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:56:38.115357 containerd[1904]: time="2025-03-17T17:56:38.115322823Z" level=info msg="CreateContainer within sandbox \"ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:56:38.154890 containerd[1904]: time="2025-03-17T17:56:38.154768952Z" level=info msg="CreateContainer within sandbox \"ac2b46370f00c54dc5935eeef12d503708aff3275716153c16f13d6bc13cf3ac\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3c0caa088662e00da786ded671a7d431b37a02bab5c20e5ff4abad5bb10bcafc\"" Mar 17 17:56:38.157964 containerd[1904]: time="2025-03-17T17:56:38.156005867Z" level=info msg="StartContainer for \"3c0caa088662e00da786ded671a7d431b37a02bab5c20e5ff4abad5bb10bcafc\"" Mar 17 17:56:38.186055 containerd[1904]: time="2025-03-17T17:56:38.185209294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595df8959-6xltz,Uid:542832a2-982d-4ce6-bea6-e93e3ddc1549,Namespace:calico-system,Attempt:0,}" Mar 17 17:56:38.240547 systemd[1]: Started cri-containerd-3c0caa088662e00da786ded671a7d431b37a02bab5c20e5ff4abad5bb10bcafc.scope - libcontainer container 
3c0caa088662e00da786ded671a7d431b37a02bab5c20e5ff4abad5bb10bcafc. Mar 17 17:56:38.297926 containerd[1904]: time="2025-03-17T17:56:38.297880291Z" level=info msg="StartContainer for \"3c0caa088662e00da786ded671a7d431b37a02bab5c20e5ff4abad5bb10bcafc\" returns successfully" Mar 17 17:56:38.463601 systemd-networkd[1743]: calif0feab6c722: Link UP Mar 17 17:56:38.463927 systemd-networkd[1743]: calif0feab6c722: Gained carrier Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.295 [INFO][5086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0 calico-kube-controllers-5595df8959- calico-system 542832a2-982d-4ce6-bea6-e93e3ddc1549 1469 0 2025-03-17 17:56:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5595df8959 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172.31.26.100 calico-kube-controllers-5595df8959-6xltz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif0feab6c722 [] []}} ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.297 [INFO][5086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.398 [INFO][5108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" HandleID="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Workload="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.410 [INFO][5108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" HandleID="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Workload="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e1ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.26.100", "pod":"calico-kube-controllers-5595df8959-6xltz", "timestamp":"2025-03-17 17:56:38.398576651 +0000 UTC"}, Hostname:"172.31.26.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.410 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.410 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.410 [INFO][5108] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.100' Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.413 [INFO][5108] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.418 [INFO][5108] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.423 [INFO][5108] ipam/ipam.go 489: Trying affinity for 192.168.70.128/26 host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.426 [INFO][5108] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.429 [INFO][5108] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.429 [INFO][5108] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.431 [INFO][5108] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0 Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.437 [INFO][5108] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.452 [INFO][5108] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.133/26] block=192.168.70.128/26 
handle="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.452 [INFO][5108] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.133/26] handle="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" host="172.31.26.100" Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.453 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:56:38.508610 containerd[1904]: 2025-03-17 17:56:38.453 [INFO][5108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.133/26] IPv6=[] ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" HandleID="k8s-pod-network.3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Workload="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.509898 containerd[1904]: 2025-03-17 17:56:38.455 [INFO][5086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0", GenerateName:"calico-kube-controllers-5595df8959-", Namespace:"calico-system", SelfLink:"", UID:"542832a2-982d-4ce6-bea6-e93e3ddc1549", ResourceVersion:"1469", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5595df8959", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"", Pod:"calico-kube-controllers-5595df8959-6xltz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0feab6c722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:56:38.509898 containerd[1904]: 2025-03-17 17:56:38.455 [INFO][5086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.133/32] ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.509898 containerd[1904]: 2025-03-17 17:56:38.455 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0feab6c722 ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.509898 containerd[1904]: 2025-03-17 17:56:38.461 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.509898 
containerd[1904]: 2025-03-17 17:56:38.461 [INFO][5086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0", GenerateName:"calico-kube-controllers-5595df8959-", Namespace:"calico-system", SelfLink:"", UID:"542832a2-982d-4ce6-bea6-e93e3ddc1549", ResourceVersion:"1469", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5595df8959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.100", ContainerID:"3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0", Pod:"calico-kube-controllers-5595df8959-6xltz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif0feab6c722", MAC:"e2:08:ec:f9:9b:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:56:38.509898 containerd[1904]: 
2025-03-17 17:56:38.499 [INFO][5086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0" Namespace="calico-system" Pod="calico-kube-controllers-5595df8959-6xltz" WorkloadEndpoint="172.31.26.100-k8s-calico--kube--controllers--5595df8959--6xltz-eth0" Mar 17 17:56:38.561446 containerd[1904]: time="2025-03-17T17:56:38.561091771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:38.561446 containerd[1904]: time="2025-03-17T17:56:38.561173850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:38.561446 containerd[1904]: time="2025-03-17T17:56:38.561196946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:38.561967 containerd[1904]: time="2025-03-17T17:56:38.561329628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:38.589630 systemd[1]: Started cri-containerd-3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0.scope - libcontainer container 3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0. 
Mar 17 17:56:38.645176 containerd[1904]: time="2025-03-17T17:56:38.645049675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595df8959-6xltz,Uid:542832a2-982d-4ce6-bea6-e93e3ddc1549,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0\"" Mar 17 17:56:38.691567 containerd[1904]: time="2025-03-17T17:56:38.691526143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:56:38.823511 systemd-networkd[1743]: cali5ec59c6bf6e: Gained IPv6LL Mar 17 17:56:39.051116 kubelet[2362]: E0317 17:56:39.051060 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:39.784168 systemd-networkd[1743]: calif0feab6c722: Gained IPv6LL Mar 17 17:56:40.051921 kubelet[2362]: E0317 17:56:40.051539 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:41.052501 kubelet[2362]: E0317 17:56:41.052384 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:41.158346 containerd[1904]: time="2025-03-17T17:56:41.158298511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:41.162569 containerd[1904]: time="2025-03-17T17:56:41.162501242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 17 17:56:41.166537 containerd[1904]: time="2025-03-17T17:56:41.166495726Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:41.170012 containerd[1904]: time="2025-03-17T17:56:41.169942133Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:41.171012 containerd[1904]: time="2025-03-17T17:56:41.170696906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.479129485s" Mar 17 17:56:41.171012 containerd[1904]: time="2025-03-17T17:56:41.170739061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 17 17:56:41.191305 containerd[1904]: time="2025-03-17T17:56:41.191229005Z" level=info msg="CreateContainer within sandbox \"3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:56:41.214429 containerd[1904]: time="2025-03-17T17:56:41.214381685Z" level=info msg="CreateContainer within sandbox \"3d1cb7853bf1f863b89a2112baff3be79441328ee99067702b9b486248dd4df0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"60dabeae6f1877779c24cf736eabaea4a6155b14dac0acc5ec9b374ac76936d2\"" Mar 17 17:56:41.216644 containerd[1904]: time="2025-03-17T17:56:41.216039215Z" level=info msg="StartContainer for \"60dabeae6f1877779c24cf736eabaea4a6155b14dac0acc5ec9b374ac76936d2\"" Mar 17 17:56:41.258513 systemd[1]: Started cri-containerd-60dabeae6f1877779c24cf736eabaea4a6155b14dac0acc5ec9b374ac76936d2.scope - libcontainer container 60dabeae6f1877779c24cf736eabaea4a6155b14dac0acc5ec9b374ac76936d2. 
Mar 17 17:56:41.326912 containerd[1904]: time="2025-03-17T17:56:41.326786591Z" level=info msg="StartContainer for \"60dabeae6f1877779c24cf736eabaea4a6155b14dac0acc5ec9b374ac76936d2\" returns successfully" Mar 17 17:56:41.869125 kubelet[2362]: I0317 17:56:41.868433 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=30.459323953 podStartE2EDuration="30.86840798s" podCreationTimestamp="2025-03-17 17:56:11 +0000 UTC" firstStartedPulling="2025-03-17 17:56:37.704302808 +0000 UTC m=+76.671572296" lastFinishedPulling="2025-03-17 17:56:38.113386837 +0000 UTC m=+77.080656323" observedRunningTime="2025-03-17 17:56:38.837886217 +0000 UTC m=+77.805155715" watchObservedRunningTime="2025-03-17 17:56:41.86840798 +0000 UTC m=+80.835677473" Mar 17 17:56:41.869125 kubelet[2362]: I0317 17:56:41.868984 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5595df8959-6xltz" podStartSLOduration=2.388045927 podStartE2EDuration="4.868973015s" podCreationTimestamp="2025-03-17 17:56:37 +0000 UTC" firstStartedPulling="2025-03-17 17:56:38.691157409 +0000 UTC m=+77.658426889" lastFinishedPulling="2025-03-17 17:56:41.172084487 +0000 UTC m=+80.139353977" observedRunningTime="2025-03-17 17:56:41.868793071 +0000 UTC m=+80.836062569" watchObservedRunningTime="2025-03-17 17:56:41.868973015 +0000 UTC m=+80.836242511" Mar 17 17:56:41.983396 kubelet[2362]: E0317 17:56:41.983344 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:42.053326 kubelet[2362]: E0317 17:56:42.053253 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:42.632786 ntpd[1877]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%11]:123 Mar 17 17:56:42.632878 ntpd[1877]: Listen normally on 13 calif0feab6c722 
[fe80::ecee:eeff:feee:eeee%12]:123 Mar 17 17:56:42.633436 ntpd[1877]: 17 Mar 17:56:42 ntpd[1877]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%11]:123 Mar 17 17:56:42.633436 ntpd[1877]: 17 Mar 17:56:42 ntpd[1877]: Listen normally on 13 calif0feab6c722 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 17 17:56:43.054401 kubelet[2362]: E0317 17:56:43.054347 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:44.055407 kubelet[2362]: E0317 17:56:44.055338 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:45.056261 kubelet[2362]: E0317 17:56:45.056205 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:46.057408 kubelet[2362]: E0317 17:56:46.057349 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:47.058293 kubelet[2362]: E0317 17:56:47.058230 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:48.058614 kubelet[2362]: E0317 17:56:48.058555 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:49.059497 kubelet[2362]: E0317 17:56:49.059440 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:50.059897 kubelet[2362]: E0317 17:56:50.059855 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:51.060722 kubelet[2362]: E0317 17:56:51.060653 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:52.061205 kubelet[2362]: E0317 17:56:52.061140 
2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:53.061969 kubelet[2362]: E0317 17:56:53.061846 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:54.062398 kubelet[2362]: E0317 17:56:54.062337 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:54.179816 kubelet[2362]: E0317 17:56:54.179572 2362 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.26.100)" Mar 17 17:56:55.063561 kubelet[2362]: E0317 17:56:55.063506 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:56.064469 kubelet[2362]: E0317 17:56:56.064423 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:57.065801 kubelet[2362]: E0317 17:56:57.065585 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:58.066444 kubelet[2362]: E0317 17:56:58.066375 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:56:59.067359 kubelet[2362]: E0317 17:56:59.067302 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:00.067751 kubelet[2362]: E0317 17:57:00.067688 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:01.068236 kubelet[2362]: E0317 17:57:01.068181 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 17:57:01.983743 kubelet[2362]: E0317 17:57:01.983690 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:02.069396 kubelet[2362]: E0317 17:57:02.069303 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:03.070010 kubelet[2362]: E0317 17:57:03.069903 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:04.070295 kubelet[2362]: E0317 17:57:04.070236 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:04.176932 kubelet[2362]: E0317 17:57:04.176656 2362 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.26.100)" Mar 17 17:57:04.177172 kubelet[2362]: E0317 17:57:04.176764 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:56:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:56:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:56:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:56:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.2\\\"],\\\"sizeBytes\\\":142241307},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.2\\\"],\\\"sizeBytes\\\":99274581},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73060009},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\\\",\\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\\\"],\\\"sizeBytes\\\":36285984},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\\\",\\\"ghcr.io/flatcar/calico/typha:v3.29.2\\\"],\\\"sizeBytes\\\":31907171},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\\\",\\\"registry.k8s.io/
kube-proxy:v1.31.7\\\"],\\\"sizeBytes\\\":30353649},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\\\"],\\\"sizeBytes\\\":15479899},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.2\\\"],\\\"sizeBytes\\\":9402991},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\\\"],\\\"sizeBytes\\\":6857075},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.26.100\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes 172.31.26.100)" Mar 17 17:57:05.070684 kubelet[2362]: E0317 17:57:05.070621 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:06.071607 kubelet[2362]: E0317 17:57:06.071548 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:07.072380 kubelet[2362]: E0317 17:57:07.072320 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:08.073563 kubelet[2362]: E0317 17:57:08.073437 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:57:08.220470 systemd[1]: run-containerd-runc-k8s.io-60dabeae6f1877779c24cf736eabaea4a6155b14dac0acc5ec9b374ac76936d2-runc.Ndij6H.mount: Deactivated successfully. 
Mar 17 17:57:09.074567 kubelet[2362]: E0317 17:57:09.074511 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:10.075334 kubelet[2362]: E0317 17:57:10.075281 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:11.080533 kubelet[2362]: E0317 17:57:11.080386 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:12.080738 kubelet[2362]: E0317 17:57:12.080683 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:13.081732 kubelet[2362]: E0317 17:57:13.081688 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:14.082162 kubelet[2362]: E0317 17:57:14.082090 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:14.174264 kubelet[2362]: E0317 17:57:14.174213 2362 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.26.100)"
Mar 17 17:57:14.174469 kubelet[2362]: E0317 17:57:14.174213 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.26.100\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes 172.31.26.100)"
Mar 17 17:57:15.082895 kubelet[2362]: E0317 17:57:15.082835 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:16.083883 kubelet[2362]: E0317 17:57:16.083841 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:17.084982 kubelet[2362]: E0317 17:57:17.084891 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:18.087684 kubelet[2362]: E0317 17:57:18.085988 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:19.088095 kubelet[2362]: E0317 17:57:19.088044 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:20.088924 kubelet[2362]: E0317 17:57:20.088869 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:21.090301 kubelet[2362]: E0317 17:57:21.089084 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:21.983446 kubelet[2362]: E0317 17:57:21.983340 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:22.089634 kubelet[2362]: E0317 17:57:22.089581 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:23.089973 kubelet[2362]: E0317 17:57:23.089918 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:24.090384 kubelet[2362]: E0317 17:57:24.090328 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:24.171737 kubelet[2362]: E0317 17:57:24.171510 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.26.100\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes 172.31.26.100)"
Mar 17 17:57:24.171737 kubelet[2362]: E0317 17:57:24.171568 2362 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.26.100)"
Mar 17 17:57:25.090963 kubelet[2362]: E0317 17:57:25.090889 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:26.091588 kubelet[2362]: E0317 17:57:26.091532 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:27.092240 kubelet[2362]: E0317 17:57:27.092184 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:27.476199 kubelet[2362]: E0317 17:57:27.471731 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.215:6443/api/v1/namespaces/calico-system/events\": unexpected EOF" event=<
Mar 17 17:57:27.476199 kubelet[2362]: &Event{ObjectMeta:{calico-kube-controllers-5595df8959-6xltz.182da8ccc29b7da8 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-5595df8959-6xltz,UID:542832a2-982d-4ce6-bea6-e93e3ddc1549,APIVersion:v1,ResourceVersion:1465,FieldPath:spec.containers{calico-kube-controllers},},Reason:Unhealthy,Message:Readiness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded; Error reaching apiserver: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded with http status code: 500
Mar 17 17:57:27.476199 kubelet[2362]: ,Source:EventSource{Component:kubelet,Host:172.31.26.100,},FirstTimestamp:2025-03-17 17:57:08.254014888 +0000 UTC m=+107.221284379,LastTimestamp:2025-03-17 17:57:08.254014888 +0000 UTC m=+107.221284379,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.26.100,}
Mar 17 17:57:27.476199 kubelet[2362]: >
Mar 17 17:57:27.476591 kubelet[2362]: E0317 17:57:27.476552 2362 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.100?timeout=10s\": unexpected EOF"
Mar 17 17:57:27.479197 kubelet[2362]: I0317 17:57:27.476587 2362 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 17 17:57:27.485634 kubelet[2362]: E0317 17:57:27.485574 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.100?timeout=10s\": dial tcp 172.31.21.215:6443: connect: connection refused" interval="200ms"
Mar 17 17:57:27.686995 kubelet[2362]: E0317 17:57:27.686923 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.100?timeout=10s\": dial tcp 172.31.21.215:6443: connect: connection refused" interval="400ms"
Mar 17 17:57:28.089456 kubelet[2362]: E0317 17:57:28.088983 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.100?timeout=10s\": dial tcp 172.31.21.215:6443: connect: connection refused" interval="800ms"
Mar 17 17:57:28.092782 kubelet[2362]: E0317 17:57:28.092723 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:28.481625 kubelet[2362]: E0317 17:57:28.481357 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.26.100\": Get \"https://172.31.21.215:6443/api/v1/nodes/172.31.26.100?timeout=10s\": dial tcp 172.31.21.215:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Mar 17 17:57:28.485559 kubelet[2362]: E0317 17:57:28.482573 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.26.100\": Get \"https://172.31.21.215:6443/api/v1/nodes/172.31.26.100?timeout=10s\": dial tcp 172.31.21.215:6443: connect: connection refused"
Mar 17 17:57:28.485559 kubelet[2362]: E0317 17:57:28.482601 2362 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
Mar 17 17:57:28.486768 kubelet[2362]: I0317 17:57:28.486410 2362 status_manager.go:851] "Failed to get status for pod" podUID="7c6e4467-ce2e-4997-b4da-5e7d786827e5" pod="calico-system/calico-node-2gs88" err="Get \"https://172.31.21.215:6443/api/v1/namespaces/calico-system/pods/calico-node-2gs88\": dial tcp 172.31.21.215:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Mar 17 17:57:28.488087 kubelet[2362]: I0317 17:57:28.488034 2362 status_manager.go:851] "Failed to get status for pod" podUID="7c6e4467-ce2e-4997-b4da-5e7d786827e5" pod="calico-system/calico-node-2gs88" err="Get \"https://172.31.21.215:6443/api/v1/namespaces/calico-system/pods/calico-node-2gs88\": dial tcp 172.31.21.215:6443: connect: connection refused"
Mar 17 17:57:29.093349 kubelet[2362]: E0317 17:57:29.093287 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:30.093579 kubelet[2362]: E0317 17:57:30.093516 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:31.094167 kubelet[2362]: E0317 17:57:31.094112 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:32.094368 kubelet[2362]: E0317 17:57:32.094311 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:33.094936 kubelet[2362]: E0317 17:57:33.094882 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:34.095955 kubelet[2362]: E0317 17:57:34.095900 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:35.096725 kubelet[2362]: E0317 17:57:35.096676 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:36.097554 kubelet[2362]: E0317 17:57:36.097443 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:37.097953 kubelet[2362]: E0317 17:57:37.097900 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:38.098748 kubelet[2362]: E0317 17:57:38.098691 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:38.890262 kubelet[2362]: E0317 17:57:38.890208 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.100?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 17 17:57:39.099327 kubelet[2362]: E0317 17:57:39.099265 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:40.100411 kubelet[2362]: E0317 17:57:40.100356 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:41.100896 kubelet[2362]: E0317 17:57:41.100840 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:41.983835 kubelet[2362]: E0317 17:57:41.983782 2362 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:42.101751 kubelet[2362]: E0317 17:57:42.101705 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:43.102782 kubelet[2362]: E0317 17:57:43.102649 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:44.102986 kubelet[2362]: E0317 17:57:44.102927 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:45.103804 kubelet[2362]: E0317 17:57:45.103756 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:46.104208 kubelet[2362]: E0317 17:57:46.104152 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:57:47.105308 kubelet[2362]: E0317 17:57:47.105237 2362 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"