Apr 30 03:20:58.008122 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:20:58.008155 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:20:58.008171 kernel: BIOS-provided physical RAM map:
Apr 30 03:20:58.008180 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:20:58.008188 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 03:20:58.008196 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 03:20:58.008206 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 03:20:58.008215 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 03:20:58.008224 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 30 03:20:58.008232 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 30 03:20:58.008244 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 30 03:20:58.008252 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 30 03:20:58.008265 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 30 03:20:58.008274 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 30 03:20:58.008288 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 30 03:20:58.008298 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 03:20:58.008311 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 30 03:20:58.008320 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 30 03:20:58.008329 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 03:20:58.008338 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 03:20:58.008347 kernel: NX (Execute Disable) protection: active
Apr 30 03:20:58.008356 kernel: APIC: Static calls initialized
Apr 30 03:20:58.008365 kernel: efi: EFI v2.7 by EDK II
Apr 30 03:20:58.008374 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 30 03:20:58.008383 kernel: SMBIOS 2.8 present.
Apr 30 03:20:58.008393 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 30 03:20:58.008401 kernel: Hypervisor detected: KVM
Apr 30 03:20:58.008414 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:20:58.008438 kernel: kvm-clock: using sched offset of 5541676251 cycles
Apr 30 03:20:58.008449 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:20:58.008458 kernel: tsc: Detected 2794.748 MHz processor
Apr 30 03:20:58.008468 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:20:58.008478 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:20:58.008488 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Apr 30 03:20:58.008497 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:20:58.008507 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:20:58.008520 kernel: Using GB pages for direct mapping
Apr 30 03:20:58.008530 kernel: Secure boot disabled
Apr 30 03:20:58.008539 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:20:58.008549 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 30 03:20:58.008564 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 03:20:58.008574 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:20:58.008584 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:20:58.008599 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 30 03:20:58.008610 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:20:58.008625 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:20:58.008635 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:20:58.008645 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:20:58.008663 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 03:20:58.008673 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 30 03:20:58.008687 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Apr 30 03:20:58.008697 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 30 03:20:58.008707 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 30 03:20:58.008717 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 30 03:20:58.008727 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 30 03:20:58.008737 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 30 03:20:58.008747 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 30 03:20:58.008756 kernel: No NUMA configuration found
Apr 30 03:20:58.008769 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 30 03:20:58.008782 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 30 03:20:58.008792 kernel: Zone ranges:
Apr 30 03:20:58.008802 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:20:58.008812 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 30 03:20:58.008822 kernel: Normal empty
Apr 30 03:20:58.008832 kernel: Movable zone start for each node
Apr 30 03:20:58.008841 kernel: Early memory node ranges
Apr 30 03:20:58.008851 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:20:58.008880 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 30 03:20:58.008906 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 30 03:20:58.008916 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 30 03:20:58.008927 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 30 03:20:58.008937 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 30 03:20:58.008950 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 30 03:20:58.008960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:20:58.008975 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:20:58.008985 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 30 03:20:58.008995 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:20:58.009005 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 30 03:20:58.009019 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 03:20:58.009029 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 30 03:20:58.009039 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:20:58.009049 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:20:58.009059 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:20:58.009069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:20:58.009079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:20:58.009088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:20:58.009098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:20:58.009112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:20:58.009122 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:20:58.009132 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:20:58.009141 kernel: TSC deadline timer available
Apr 30 03:20:58.009151 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 30 03:20:58.009161 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:20:58.009171 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 30 03:20:58.009181 kernel: kvm-guest: setup PV sched yield
Apr 30 03:20:58.009191 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 30 03:20:58.009204 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:20:58.009214 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:20:58.009225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 30 03:20:58.009234 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Apr 30 03:20:58.009244 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Apr 30 03:20:58.009254 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 30 03:20:58.009264 kernel: kvm-guest: PV spinlocks enabled
Apr 30 03:20:58.009274 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:20:58.009285 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:20:58.009303 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:20:58.009313 kernel: random: crng init done
Apr 30 03:20:58.009323 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:20:58.009333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:20:58.009342 kernel: Fallback order for Node 0: 0
Apr 30 03:20:58.009352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 30 03:20:58.009362 kernel: Policy zone: DMA32
Apr 30 03:20:58.009372 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:20:58.009386 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 171124K reserved, 0K cma-reserved)
Apr 30 03:20:58.009396 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 03:20:58.009406 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:20:58.009416 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:20:58.009441 kernel: Dynamic Preempt: voluntary
Apr 30 03:20:58.009462 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:20:58.009477 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:20:58.009488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 03:20:58.009499 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:20:58.009509 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:20:58.009520 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:20:58.009530 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:20:58.009544 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 03:20:58.009555 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 30 03:20:58.009568 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:20:58.009579 kernel: Console: colour dummy device 80x25
Apr 30 03:20:58.009589 kernel: printk: console [ttyS0] enabled
Apr 30 03:20:58.009604 kernel: ACPI: Core revision 20230628
Apr 30 03:20:58.009615 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:20:58.009625 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:20:58.009635 kernel: x2apic enabled
Apr 30 03:20:58.009646 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:20:58.009665 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 30 03:20:58.009676 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 30 03:20:58.009686 kernel: kvm-guest: setup PV IPIs
Apr 30 03:20:58.009697 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:20:58.009711 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 03:20:58.009721 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 30 03:20:58.009732 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 03:20:58.009742 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 03:20:58.009753 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 03:20:58.009763 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:20:58.009774 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:20:58.009784 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:20:58.009795 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:20:58.009808 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 03:20:58.009819 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 03:20:58.009829 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:20:58.009840 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:20:58.009853 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 30 03:20:58.009865 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 30 03:20:58.009875 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 30 03:20:58.009886 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:20:58.009900 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:20:58.009911 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:20:58.009921 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:20:58.009932 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 03:20:58.009943 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:20:58.009953 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:20:58.009963 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:20:58.009974 kernel: landlock: Up and running.
Apr 30 03:20:58.009984 kernel: SELinux: Initializing.
Apr 30 03:20:58.009998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 03:20:58.010008 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 03:20:58.010019 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 03:20:58.010029 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 03:20:58.010040 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 03:20:58.010050 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 03:20:58.010061 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 03:20:58.010071 kernel: ... version: 0
Apr 30 03:20:58.010081 kernel: ... bit width: 48
Apr 30 03:20:58.010096 kernel: ... generic registers: 6
Apr 30 03:20:58.010106 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:20:58.010116 kernel: ... max period: 00007fffffffffff
Apr 30 03:20:58.010127 kernel: ... fixed-purpose events: 0
Apr 30 03:20:58.010137 kernel: ... event mask: 000000000000003f
Apr 30 03:20:58.010147 kernel: signal: max sigframe size: 1776
Apr 30 03:20:58.010157 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:20:58.010168 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:20:58.010178 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:20:58.010192 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:20:58.010203 kernel: .... node #0, CPUs: #1 #2 #3
Apr 30 03:20:58.010213 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 03:20:58.010223 kernel: smpboot: Max logical packages: 1
Apr 30 03:20:58.010234 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 30 03:20:58.010244 kernel: devtmpfs: initialized
Apr 30 03:20:58.010255 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:20:58.010265 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 30 03:20:58.010276 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 30 03:20:58.010291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 30 03:20:58.010301 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 30 03:20:58.010312 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 30 03:20:58.010323 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:20:58.010333 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 03:20:58.010344 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:20:58.010354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:20:58.010365 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:20:58.010375 kernel: audit: type=2000 audit(1745983257.105:1): state=initialized audit_enabled=0 res=1
Apr 30 03:20:58.010390 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:20:58.010400 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:20:58.010411 kernel: cpuidle: using governor menu
Apr 30 03:20:58.010626 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:20:58.010641 kernel: dca service started, version 1.12.1
Apr 30 03:20:58.010652 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 03:20:58.010673 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 30 03:20:58.010684 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:20:58.010694 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:20:58.010711 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:20:58.010721 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:20:58.010732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:20:58.010743 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:20:58.010753 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:20:58.010764 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:20:58.010774 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:20:58.010785 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:20:58.010796 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:20:58.010811 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:20:58.010821 kernel: ACPI: Interpreter enabled
Apr 30 03:20:58.010831 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 30 03:20:58.010842 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:20:58.010852 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:20:58.010863 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:20:58.010873 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 03:20:58.010883 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:20:58.011163 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:20:58.011345 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 03:20:58.011539 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 03:20:58.011556 kernel: PCI host bridge to bus 0000:00
Apr 30 03:20:58.011765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:20:58.011924 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:20:58.012079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:20:58.012236 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 30 03:20:58.012365 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 03:20:58.012560 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 30 03:20:58.012730 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:20:58.012961 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 03:20:58.013165 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 30 03:20:58.013354 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 30 03:20:58.013582 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 30 03:20:58.013803 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 03:20:58.013976 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 30 03:20:58.014155 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:20:58.014361 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 03:20:58.014592 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 30 03:20:58.016218 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 30 03:20:58.016399 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 30 03:20:58.016729 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:20:58.016902 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 30 03:20:58.017072 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 30 03:20:58.017243 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 30 03:20:58.017448 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:20:58.017629 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 30 03:20:58.019373 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 30 03:20:58.019567 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 30 03:20:58.019760 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 30 03:20:58.019962 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 03:20:58.020133 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 03:20:58.020331 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 03:20:58.020537 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 30 03:20:58.020738 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 30 03:20:58.020977 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 03:20:58.021153 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 30 03:20:58.021171 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:20:58.021182 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:20:58.021193 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:20:58.021210 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:20:58.021221 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 03:20:58.021233 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 03:20:58.021244 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 03:20:58.021255 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 03:20:58.021266 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 03:20:58.021277 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 03:20:58.021288 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 03:20:58.021299 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 03:20:58.021315 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 03:20:58.021325 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 03:20:58.021336 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 03:20:58.021346 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 03:20:58.021358 kernel: iommu: Default domain type: Translated
Apr 30 03:20:58.021369 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:20:58.021380 kernel: efivars: Registered efivars operations
Apr 30 03:20:58.021391 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:20:58.021401 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:20:58.021416 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 30 03:20:58.021476 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 30 03:20:58.021487 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 30 03:20:58.021498 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 30 03:20:58.021678 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 03:20:58.021852 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 03:20:58.022028 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:20:58.022045 kernel: vgaarb: loaded
Apr 30 03:20:58.022056 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:20:58.022073 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:20:58.022084 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:20:58.022094 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:20:58.022105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:20:58.022115 kernel: pnp: PnP ACPI init
Apr 30 03:20:58.022275 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 03:20:58.022288 kernel: pnp: PnP ACPI: found 6 devices
Apr 30 03:20:58.022296 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:20:58.022308 kernel: NET: Registered PF_INET protocol family
Apr 30 03:20:58.022316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:20:58.022324 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 03:20:58.022333 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:20:58.022340 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:20:58.022348 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 03:20:58.022356 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 03:20:58.022364 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 03:20:58.022372 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 03:20:58.022383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:20:58.022390 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:20:58.022539 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 30 03:20:58.022678 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 30 03:20:58.022799 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:20:58.022938 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:20:58.023096 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:20:58.023260 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 30 03:20:58.023469 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 03:20:58.023626 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 30 03:20:58.023641 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:20:58.023652 kernel: Initialise system trusted keyrings
Apr 30 03:20:58.023674 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 03:20:58.023685 kernel: Key type asymmetric registered
Apr 30 03:20:58.023696 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:20:58.023706 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:20:58.023717 kernel: io scheduler mq-deadline registered
Apr 30 03:20:58.023736 kernel: io scheduler kyber registered
Apr 30 03:20:58.023746 kernel: io scheduler bfq registered
Apr 30 03:20:58.023757 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:20:58.023769 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 03:20:58.023781 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 03:20:58.023792 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 30 03:20:58.023803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:20:58.023814 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:20:58.023825 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:20:58.023837 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:20:58.023845 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:20:58.024001 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 30 03:20:58.024014 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:20:58.024155 kernel: rtc_cmos 00:04: registered as rtc0
Apr 30 03:20:58.024312 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T03:20:57 UTC (1745983257)
Apr 30 03:20:58.024518 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 03:20:58.024541 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 03:20:58.024549 kernel: efifb: probing for efifb
Apr 30 03:20:58.024557 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 30 03:20:58.024564 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 30 03:20:58.024572 kernel: efifb: scrolling: redraw
Apr 30 03:20:58.024580 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 30 03:20:58.024588 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 03:20:58.024616 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:20:58.024627 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:20:58.024638 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:20:58.024646 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:20:58.024663 kernel: Segment Routing with IPv6
Apr 30 03:20:58.024672 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:20:58.024681 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:20:58.024689 kernel: Key type dns_resolver registered
Apr 30 03:20:58.024697 kernel: IPI shorthand broadcast: enabled
Apr 30 03:20:58.024706 kernel: sched_clock: Marking stable (1104002962, 124512765)->(1282291192, -53775465)
Apr 30 03:20:58.024715 kernel: registered taskstats version 1
Apr 30 03:20:58.024724 kernel: Loading compiled-in X.509 certificates
Apr 30 03:20:58.024737 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:20:58.024745 kernel: Key type .fscrypt registered
Apr 30 03:20:58.024753 kernel: Key type fscrypt-provisioning registered
Apr 30 03:20:58.024763 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:20:58.024774 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:20:58.024786 kernel: ima: No architecture policies found
Apr 30 03:20:58.024797 kernel: clk: Disabling unused clocks
Apr 30 03:20:58.024808 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:20:58.024823 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:20:58.024834 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:20:58.024845 kernel: Run /init as init process
Apr 30 03:20:58.024855 kernel: with arguments:
Apr 30 03:20:58.024864 kernel: /init
Apr 30 03:20:58.024874 kernel: with environment:
Apr 30 03:20:58.024893 kernel: HOME=/
Apr 30 03:20:58.024906 kernel: TERM=linux
Apr 30 03:20:58.024917 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:20:58.024937 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:20:58.024952 systemd[1]: Detected virtualization kvm.
Apr 30 03:20:58.024964 systemd[1]: Detected architecture x86-64.
Apr 30 03:20:58.024977 systemd[1]: Running in initrd.
Apr 30 03:20:58.024994 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:20:58.025006 systemd[1]: Hostname set to .
Apr 30 03:20:58.025036 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:20:58.025048 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:20:58.025060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:20:58.025073 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:20:58.026383 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:20:58.026400 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:20:58.026420 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:20:58.026445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:20:58.026460 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:20:58.026473 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:20:58.026485 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:20:58.026497 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:20:58.026509 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:20:58.026526 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:20:58.026538 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:20:58.026550 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:20:58.026562 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:20:58.026574 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:20:58.026587 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:20:58.026606 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:20:58.026629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:20:58.026672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:20:58.026707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:20:58.026729 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:20:58.026750 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:20:58.026762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:20:58.026791 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:20:58.026804 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:20:58.026816 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:20:58.026828 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:20:58.026845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:20:58.026857 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:20:58.026869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:20:58.026881 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:20:58.026927 systemd-journald[193]: Collecting audit messages is disabled. Apr 30 03:20:58.026962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:20:58.026974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:20:58.026987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:20:58.027003 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:20:58.027015 systemd-journald[193]: Journal started Apr 30 03:20:58.027040 systemd-journald[193]: Runtime Journal (/run/log/journal/0b729922a3ff495797564fa067d180c6) is 6.0M, max 48.3M, 42.2M free. Apr 30 03:20:58.006559 systemd-modules-load[194]: Inserted module 'overlay' Apr 30 03:20:58.029885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 30 03:20:58.035005 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:20:58.039643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:20:58.039716 kernel: Bridge firewalling registered Apr 30 03:20:58.040193 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 30 03:20:58.042157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:20:58.045819 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:20:58.048116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:20:58.052888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:20:58.056912 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:20:58.059646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:20:58.064966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:20:58.074119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:20:58.077217 dracut-cmdline[222]: dracut-dracut-053 Apr 30 03:20:58.078179 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:20:58.091592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:20:58.123084 systemd-resolved[240]: Positive Trust Anchors: Apr 30 03:20:58.123107 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:20:58.123138 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:20:58.125927 systemd-resolved[240]: Defaulting to hostname 'linux'. Apr 30 03:20:58.127316 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:20:58.133249 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:20:58.174484 kernel: SCSI subsystem initialized Apr 30 03:20:58.184475 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:20:58.196478 kernel: iscsi: registered transport (tcp) Apr 30 03:20:58.222476 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:20:58.222558 kernel: QLogic iSCSI HBA Driver Apr 30 03:20:58.290351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:20:58.300668 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:20:58.329535 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 03:20:58.329628 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:20:58.329646 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:20:58.378508 kernel: raid6: avx2x4 gen() 12000 MB/s Apr 30 03:20:58.395511 kernel: raid6: avx2x2 gen() 17003 MB/s Apr 30 03:20:58.412701 kernel: raid6: avx2x1 gen() 18575 MB/s Apr 30 03:20:58.412821 kernel: raid6: using algorithm avx2x1 gen() 18575 MB/s Apr 30 03:20:58.430667 kernel: raid6: .... xor() 12563 MB/s, rmw enabled Apr 30 03:20:58.430790 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:20:58.455475 kernel: xor: automatically using best checksumming function avx Apr 30 03:20:58.621485 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:20:58.639812 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:20:58.651927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:20:58.674133 systemd-udevd[412]: Using default interface naming scheme 'v255'. Apr 30 03:20:58.681577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:20:58.693673 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:20:58.715116 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Apr 30 03:20:58.750675 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:20:58.785688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:20:58.857195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:20:58.867651 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:20:58.885499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:20:58.926777 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 30 03:20:58.929185 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:20:58.930761 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:20:58.937404 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:20:58.944493 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 30 03:20:59.029570 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 03:20:59.029742 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:20:59.029767 kernel: GPT:9289727 != 19775487 Apr 30 03:20:59.029793 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:20:59.029807 kernel: GPT:9289727 != 19775487 Apr 30 03:20:59.029818 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:20:59.029829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:20:59.029847 kernel: libata version 3.00 loaded. Apr 30 03:20:59.029859 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:20:58.944754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:20:58.951033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:20:59.033907 kernel: AES CTR mode by8 optimization enabled Apr 30 03:20:58.951195 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:20:58.952889 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 03:20:59.040486 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 03:20:59.116818 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 03:20:59.116839 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 03:20:59.117257 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 03:20:59.117406 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473) Apr 30 03:20:59.117418 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475) Apr 30 03:20:59.117473 kernel: scsi host0: ahci Apr 30 03:20:59.117647 kernel: scsi host1: ahci Apr 30 03:20:59.117813 kernel: scsi host2: ahci Apr 30 03:20:59.117985 kernel: scsi host3: ahci Apr 30 03:20:59.118169 kernel: scsi host4: ahci Apr 30 03:20:59.118319 kernel: scsi host5: ahci Apr 30 03:20:59.118495 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 30 03:20:59.118507 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 30 03:20:59.118518 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 30 03:20:59.118528 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 30 03:20:59.118539 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 30 03:20:59.118555 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 30 03:20:58.954058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:20:58.954224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:20:58.955650 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:20:59.031350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:20:59.037062 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 03:20:59.070087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 03:20:59.119031 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 03:20:59.127077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:20:59.134376 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 03:20:59.137816 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 03:20:59.159581 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:20:59.178073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:20:59.178148 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:20:59.180722 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:20:59.183578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:20:59.202751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:20:59.261770 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:20:59.319115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 03:20:59.442470 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 03:20:59.442557 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 03:20:59.443467 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 03:20:59.444450 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 03:20:59.444494 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 03:20:59.445504 kernel: ata3.00: applying bridge limits Apr 30 03:20:59.446453 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 03:20:59.446482 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 03:20:59.447456 kernel: ata3.00: configured for UDMA/100 Apr 30 03:20:59.448460 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 03:20:59.488030 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 03:20:59.500198 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:20:59.500235 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 30 03:20:59.524163 disk-uuid[566]: Primary Header is updated. Apr 30 03:20:59.524163 disk-uuid[566]: Secondary Entries is updated. Apr 30 03:20:59.524163 disk-uuid[566]: Secondary Header is updated. Apr 30 03:20:59.530438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:20:59.535476 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:21:00.546487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:21:00.546892 disk-uuid[582]: The operation has completed successfully. Apr 30 03:21:00.583450 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:21:00.583653 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:21:00.607843 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 30 03:21:00.611551 sh[598]: Success Apr 30 03:21:00.625452 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 03:21:00.663662 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:21:00.677619 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:21:00.680757 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:21:00.695006 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:21:00.695089 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:21:00.695108 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:21:00.696194 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:21:00.696969 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:21:00.702512 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:21:00.705591 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:21:00.725684 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:21:00.728816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:21:00.739784 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:21:00.739830 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:21:00.739845 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:21:00.743463 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:21:00.755955 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 30 03:21:00.758310 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:21:00.807826 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:21:00.812748 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:21:00.870968 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:21:00.880053 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:21:00.902136 ignition[730]: Ignition 2.19.0 Apr 30 03:21:00.902149 ignition[730]: Stage: fetch-offline Apr 30 03:21:00.902210 ignition[730]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:21:00.902235 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:21:00.907713 systemd-networkd[779]: lo: Link UP Apr 30 03:21:00.902370 ignition[730]: parsed url from cmdline: "" Apr 30 03:21:00.907718 systemd-networkd[779]: lo: Gained carrier Apr 30 03:21:00.902376 ignition[730]: no config URL provided Apr 30 03:21:00.909529 systemd-networkd[779]: Enumeration completed Apr 30 03:21:00.902383 ignition[730]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:21:00.909662 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:21:00.902396 ignition[730]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:21:00.910008 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:21:00.902446 ignition[730]: op(1): [started] loading QEMU firmware config module Apr 30 03:21:00.910014 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:21:00.902452 ignition[730]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 03:21:00.911908 systemd[1]: Reached target network.target - Network. 
Apr 30 03:21:00.914228 ignition[730]: op(1): [finished] loading QEMU firmware config module Apr 30 03:21:00.911962 systemd-networkd[779]: eth0: Link UP Apr 30 03:21:00.915083 ignition[730]: parsing config with SHA512: 1fd438d8ccc29121f4fa25b0d5fc3033a8f21c8d20d7ec300451cb0736a5b3c4c584cc35c02940c3cc3072a1dd246b2771d816c83c75a298f7fca673c6c93941 Apr 30 03:21:00.911968 systemd-networkd[779]: eth0: Gained carrier Apr 30 03:21:00.918068 ignition[730]: fetch-offline: fetch-offline passed Apr 30 03:21:00.911977 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:21:00.918158 ignition[730]: Ignition finished successfully Apr 30 03:21:00.917770 unknown[730]: fetched base config from "system" Apr 30 03:21:00.917779 unknown[730]: fetched user config from "qemu" Apr 30 03:21:00.920487 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:21:00.922376 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 03:21:00.935522 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:21:00.935665 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:21:00.954704 ignition[789]: Ignition 2.19.0 Apr 30 03:21:00.954715 ignition[789]: Stage: kargs Apr 30 03:21:00.954901 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:21:00.954913 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:21:00.956703 ignition[789]: kargs: kargs passed Apr 30 03:21:00.956751 ignition[789]: Ignition finished successfully Apr 30 03:21:00.960745 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:21:00.973827 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 30 03:21:00.994529 ignition[797]: Ignition 2.19.0 Apr 30 03:21:00.994543 ignition[797]: Stage: disks Apr 30 03:21:00.994770 ignition[797]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:21:00.994784 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:21:00.998175 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:21:00.995416 ignition[797]: disks: disks passed Apr 30 03:21:00.999879 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:21:00.995481 ignition[797]: Ignition finished successfully Apr 30 03:21:01.002074 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:21:01.003494 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:21:01.005268 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:21:01.005353 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:21:01.016705 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:21:01.029720 systemd-resolved[240]: Detected conflict on linux IN A 10.0.0.31 Apr 30 03:21:01.029742 systemd-resolved[240]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Apr 30 03:21:01.033678 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:21:01.047205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:21:01.056733 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:21:01.145458 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:21:01.146550 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:21:01.147494 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:21:01.160548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 03:21:01.163461 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:21:01.168805 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) Apr 30 03:21:01.165319 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:21:01.175097 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:21:01.175120 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:21:01.175135 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:21:01.175150 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:21:01.165364 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:21:01.165387 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:21:01.183618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:21:01.187897 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:21:01.189928 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:21:01.232085 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:21:01.236904 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:21:01.243474 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:21:01.249502 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:21:01.348066 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:21:01.359603 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:21:01.361802 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 30 03:21:01.369484 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:21:01.693942 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:21:02.094854 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:21:02.103518 ignition[929]: INFO : Ignition 2.19.0 Apr 30 03:21:02.103518 ignition[929]: INFO : Stage: mount Apr 30 03:21:02.105680 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:21:02.105680 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:21:02.105680 ignition[929]: INFO : mount: mount passed Apr 30 03:21:02.105680 ignition[929]: INFO : Ignition finished successfully Apr 30 03:21:02.112172 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:21:02.124685 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:21:02.133997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:21:02.149479 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945) Apr 30 03:21:02.151905 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:21:02.151938 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:21:02.151950 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:21:02.156527 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:21:02.159646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:21:02.187452 ignition[962]: INFO : Ignition 2.19.0
Apr 30 03:21:02.187452 ignition[962]: INFO : Stage: files
Apr 30 03:21:02.189392 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:21:02.189392 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:21:02.189392 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:21:02.193881 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:21:02.193881 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:21:02.200799 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:21:02.203615 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:21:02.203615 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:21:02.202187 unknown[962]: wrote ssh authorized keys file for user: core
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:21:02.209007 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 03:21:02.597377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Apr 30 03:21:02.866974 systemd-networkd[779]: eth0: Gained IPv6LL
Apr 30 03:21:03.493690 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:21:03.493690 ignition[962]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Apr 30 03:21:03.497960 ignition[962]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 03:21:03.500458 ignition[962]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 03:21:03.500458 ignition[962]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Apr 30 03:21:03.503720 ignition[962]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 03:21:03.538026 ignition[962]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 03:21:03.545216 ignition[962]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 03:21:03.547314 ignition[962]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 03:21:03.549349 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:21:03.551649 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:21:03.551649 ignition[962]: INFO : files: files passed
Apr 30 03:21:03.554819 ignition[962]: INFO : Ignition finished successfully
Apr 30 03:21:03.559288 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:21:03.570704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:21:03.573483 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:21:03.576758 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:21:03.576924 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:21:03.593462 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 03:21:03.598535 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:21:03.598535 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:21:03.602480 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:21:03.606581 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:21:03.608120 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:21:03.622706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:21:03.655096 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:21:03.655258 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:21:03.657916 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:21:03.660178 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:21:03.662293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:21:03.672673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:21:03.689970 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:21:03.694730 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:21:03.708094 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:21:03.709590 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:21:03.712174 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:21:03.714700 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:21:03.714869 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:21:03.718563 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:21:03.722130 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:21:03.724685 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:21:03.727053 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:21:03.729839 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:21:03.732511 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:21:03.734965 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:21:03.737653 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:21:03.740244 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:21:03.742482 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:21:03.744508 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:21:03.744760 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:21:03.747257 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:21:03.749075 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:21:03.751580 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:21:03.751767 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:21:03.754179 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:21:03.754386 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:21:03.757310 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:21:03.757500 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:21:03.759336 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:21:03.761699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:21:03.763757 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:21:03.765348 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:21:03.768406 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:21:03.770704 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:21:03.770865 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:21:03.772796 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:21:03.772934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:21:03.775198 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:21:03.775378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:21:03.778271 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:21:03.778461 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:21:03.790759 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:21:03.791846 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:21:03.792040 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:21:03.795392 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:21:03.796436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:21:03.796790 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:21:03.799499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:21:03.799827 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:21:03.807324 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:21:03.807508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:21:03.813176 ignition[1017]: INFO : Ignition 2.19.0
Apr 30 03:21:03.813176 ignition[1017]: INFO : Stage: umount
Apr 30 03:21:03.813176 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:21:03.813176 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:21:03.830386 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:21:03.834614 ignition[1017]: INFO : umount: umount passed
Apr 30 03:21:03.835946 ignition[1017]: INFO : Ignition finished successfully
Apr 30 03:21:03.836741 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:21:03.836912 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:21:03.838787 systemd[1]: Stopped target network.target - Network.
Apr 30 03:21:03.840108 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:21:03.840198 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:21:03.842382 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:21:03.842555 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:21:03.844897 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:21:03.844955 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:21:03.847252 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:21:03.847328 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:21:03.849807 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:21:03.851966 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:21:03.855544 systemd-networkd[779]: eth0: DHCPv6 lease lost
Apr 30 03:21:03.860633 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:21:03.860850 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:21:03.863570 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:21:03.863757 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:21:03.867770 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:21:03.867843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:21:03.878644 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:21:03.879737 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:21:03.879818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:21:03.882697 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:21:03.882762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:21:03.885572 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:21:03.885634 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:21:03.887062 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:21:03.887121 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:21:03.890372 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:21:03.922635 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:21:03.922842 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:21:03.925686 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:21:03.925902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:21:03.928545 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:21:03.928644 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:21:03.930878 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:21:03.930920 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:21:03.932880 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:21:03.932933 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:21:03.935053 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:21:03.935104 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:21:03.937033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:21:03.937096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:21:03.948618 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:21:03.950899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:21:03.950961 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:21:03.953256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:21:03.953309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:21:03.957001 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:21:03.957124 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:21:03.997679 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:21:03.997866 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:21:04.000392 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:21:04.002041 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:21:04.002106 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:21:04.011835 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:21:04.021317 systemd[1]: Switching root.
Apr 30 03:21:04.055947 systemd-journald[193]: Journal stopped
Apr 30 03:21:05.213201 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:21:05.213309 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:21:05.213354 kernel: SELinux: policy capability open_perms=1
Apr 30 03:21:05.213371 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:21:05.213387 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:21:05.213404 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:21:05.213420 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:21:05.213661 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:21:05.213679 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:21:05.213695 kernel: audit: type=1403 audit(1745983264.382:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:21:05.213720 systemd[1]: Successfully loaded SELinux policy in 54.878ms.
Apr 30 03:21:05.213754 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.387ms.
Apr 30 03:21:05.213773 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:21:05.213798 systemd[1]: Detected virtualization kvm.
Apr 30 03:21:05.213822 systemd[1]: Detected architecture x86-64.
Apr 30 03:21:05.213840 systemd[1]: Detected first boot.
Apr 30 03:21:05.213857 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:21:05.213874 zram_generator::config[1061]: No configuration found.
Apr 30 03:21:05.213892 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:21:05.213909 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:21:05.213926 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:21:05.213951 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:21:05.213970 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:21:05.213988 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:21:05.214005 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:21:05.214022 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:21:05.214040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:21:05.214058 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:21:05.214075 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:21:05.214100 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:21:05.214124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:21:05.214142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:21:05.214160 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:21:05.214177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:21:05.214221 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:21:05.214239 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:21:05.214257 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:21:05.214274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:21:05.214299 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:21:05.214316 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:21:05.214337 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:21:05.214356 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:21:05.214376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:21:05.214394 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:21:05.214412 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:21:05.214445 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:21:05.214471 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:21:05.214617 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:21:05.214639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:21:05.214656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:21:05.214673 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:21:05.214690 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:21:05.214707 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:21:05.214724 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:21:05.214742 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:21:05.214779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:21:05.214797 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:21:05.214814 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:21:05.214831 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:21:05.214849 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:21:05.214866 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:21:05.214883 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:21:05.214900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:21:05.214925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:21:05.214943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:21:05.214961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:21:05.214978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:21:05.215001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:21:05.215019 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:21:05.215036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:21:05.215053 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:21:05.215070 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:21:05.215095 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:21:05.215112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:21:05.215130 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:21:05.215146 kernel: fuse: init (API version 7.39)
Apr 30 03:21:05.215194 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:21:05.215212 kernel: loop: module loaded
Apr 30 03:21:05.215228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:21:05.215246 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:21:05.215293 systemd-journald[1131]: Collecting audit messages is disabled.
Apr 30 03:21:05.215333 kernel: ACPI: bus type drm_connector registered
Apr 30 03:21:05.215354 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:21:05.215372 systemd-journald[1131]: Journal started
Apr 30 03:21:05.215403 systemd-journald[1131]: Runtime Journal (/run/log/journal/0b729922a3ff495797564fa067d180c6) is 6.0M, max 48.3M, 42.2M free.
Apr 30 03:21:04.940404 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:21:04.958053 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 03:21:04.958605 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:21:05.218504 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:21:05.221447 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:21:05.221483 systemd[1]: Stopped verity-setup.service.
Apr 30 03:21:05.224450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:21:05.228518 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:21:05.230282 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:21:05.231788 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:21:05.233318 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:21:05.234819 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:21:05.236101 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:21:05.237577 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:21:05.238924 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:21:05.240609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:21:05.242273 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:21:05.242588 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:21:05.244241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:21:05.244445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:21:05.246095 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:21:05.246339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:21:05.248355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:21:05.248754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:21:05.250675 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:21:05.250958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:21:05.252845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:21:05.253145 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:21:05.255024 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:21:05.256777 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:21:05.258545 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:21:05.274499 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:21:05.289587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:21:05.292637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:21:05.294044 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:21:05.294100 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:21:05.296969 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:21:05.302671 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:21:05.308698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:21:05.314179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:21:05.317564 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:21:05.323367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:21:05.325904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:21:05.330594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:21:05.332342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:21:05.335621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:21:05.357580 systemd-journald[1131]: Time spent on flushing to /var/log/journal/0b729922a3ff495797564fa067d180c6 is 23.843ms for 978 entries.
Apr 30 03:21:05.357580 systemd-journald[1131]: System Journal (/var/log/journal/0b729922a3ff495797564fa067d180c6) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:21:05.404985 systemd-journald[1131]: Received client request to flush runtime journal.
Apr 30 03:21:05.344741 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:21:05.348122 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:21:05.351886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:21:05.353773 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:21:05.355676 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:21:05.359224 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:21:05.439098 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:21:05.441397 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:21:05.443726 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:21:05.447491 kernel: loop0: detected capacity change from 0 to 218376
Apr 30 03:21:05.449651 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:21:05.466763 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:21:05.468906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:21:05.475072 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:21:05.487506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:21:05.517651 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:21:05.524881 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:21:05.535717 kernel: loop1: detected capacity change from 0 to 142488
Apr 30 03:21:05.541734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:21:05.542538 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:21:05.575978 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Apr 30 03:21:05.576010 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Apr 30 03:21:05.587789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:21:05.614473 kernel: loop2: detected capacity change from 0 to 140768
Apr 30 03:21:05.681846 kernel: loop3: detected capacity change from 0 to 218376
Apr 30 03:21:05.693483 kernel: loop4: detected capacity change from 0 to 142488
Apr 30 03:21:05.725459 kernel: loop5: detected capacity change from 0 to 140768
Apr 30 03:21:05.740095 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 30 03:21:05.740806 (sd-merge)[1200]: Merged extensions into '/usr'.
Apr 30 03:21:05.745827 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:21:05.745846 systemd[1]: Reloading...
Apr 30 03:21:05.865491 zram_generator::config[1225]: No configuration found.
Apr 30 03:21:05.941867 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:21:06.074977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:21:06.156286 systemd[1]: Reloading finished in 409 ms.
Apr 30 03:21:06.263638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:21:06.265384 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:21:06.317570 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:21:06.320507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:21:06.333689 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:21:06.333705 systemd[1]: Reloading...
Apr 30 03:21:06.359656 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:21:06.360218 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:21:06.361767 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:21:06.362113 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Apr 30 03:21:06.362203 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Apr 30 03:21:06.366214 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:21:06.366331 systemd-tmpfiles[1265]: Skipping /boot
Apr 30 03:21:06.385468 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:21:06.385851 systemd-tmpfiles[1265]: Skipping /boot
Apr 30 03:21:06.440536 zram_generator::config[1306]: No configuration found.
Apr 30 03:21:06.550940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:21:06.604853 systemd[1]: Reloading finished in 270 ms.
Apr 30 03:21:06.627776 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:21:06.639711 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:21:06.679405 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:21:06.682899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:21:06.685811 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:21:06.692225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:21:06.697153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:21:06.701034 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:21:06.706680 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:21:06.706863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:21:06.708437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:21:06.712961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:21:06.730529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:21:06.732100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:21:06.737383 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:21:06.738849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:21:06.740706 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:21:06.743327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:21:06.744123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:21:06.747657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 30 03:21:06.748460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:21:06.751134 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:21:06.751440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:21:06.751625 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Apr 30 03:21:06.762802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:21:06.763414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:21:06.768899 augenrules[1360]: No rules Apr 30 03:21:06.771936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:21:06.777539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:21:06.782629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:21:06.784100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:21:06.785809 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:21:06.787823 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:21:06.789857 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:21:06.793065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:21:06.793538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:21:06.796185 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:21:06.798905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 30 03:21:06.799158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:21:06.801974 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:21:06.802539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:21:06.810465 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:21:06.815115 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:21:06.817437 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:21:06.828125 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:21:06.844562 systemd[1]: Finished ensure-sysext.service. Apr 30 03:21:06.848136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:21:06.848399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:21:06.855812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:21:06.862763 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:21:06.872645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:21:06.875828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:21:06.877300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:21:06.882173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1400) Apr 30 03:21:06.881763 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:21:06.888653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Apr 30 03:21:06.890064 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:21:06.890120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:21:06.891142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:21:06.891410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:21:06.893703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:21:06.893949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:21:06.934096 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:21:06.934359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:21:06.954157 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:21:06.955542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:21:06.962944 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:21:06.966034 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:21:06.966155 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:21:06.988585 systemd-resolved[1334]: Positive Trust Anchors: Apr 30 03:21:06.988616 systemd-resolved[1334]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:21:06.988661 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:21:06.993343 systemd-resolved[1334]: Defaulting to hostname 'linux'. Apr 30 03:21:06.996098 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:21:06.998047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:21:07.079479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 03:21:07.081889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:21:07.084473 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:21:07.110868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:21:07.118941 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 30 03:21:07.121324 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 03:21:07.121880 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 03:21:07.122190 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 03:21:07.131478 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 03:21:07.136306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:21:07.137386 systemd[1]: Reached target time-set.target - System Time Set. 
Apr 30 03:21:07.143262 systemd-networkd[1404]: lo: Link UP Apr 30 03:21:07.143280 systemd-networkd[1404]: lo: Gained carrier Apr 30 03:21:07.147112 systemd-networkd[1404]: Enumeration completed Apr 30 03:21:07.147497 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:21:07.147856 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:21:07.147876 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:21:07.149981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:21:07.150009 systemd-networkd[1404]: eth0: Link UP Apr 30 03:21:07.150014 systemd-networkd[1404]: eth0: Gained carrier Apr 30 03:21:07.150029 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:21:07.154521 systemd[1]: Reached target network.target - Network. Apr 30 03:21:07.162668 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:21:07.165561 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:21:07.166907 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Apr 30 03:21:07.867382 systemd-resolved[1334]: Clock change detected. Flushing caches. Apr 30 03:21:07.867506 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 03:21:07.867595 systemd-timesyncd[1405]: Initial clock synchronization to Wed 2025-04-30 03:21:07.867290 UTC. Apr 30 03:21:07.990051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:21:08.010957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 30 03:21:08.011366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:21:08.011762 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:21:08.026466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:21:08.030120 kernel: kvm_amd: TSC scaling supported Apr 30 03:21:08.030162 kernel: kvm_amd: Nested Virtualization enabled Apr 30 03:21:08.030177 kernel: kvm_amd: Nested Paging enabled Apr 30 03:21:08.032274 kernel: kvm_amd: LBR virtualization supported Apr 30 03:21:08.032307 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Apr 30 03:21:08.032321 kernel: kvm_amd: Virtual GIF supported Apr 30 03:21:08.061819 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:21:08.096361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:21:08.104454 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:21:08.124192 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:21:08.135050 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:21:08.196485 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:21:08.199372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:21:08.201112 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:21:08.203084 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:21:08.205202 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:21:08.207489 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:21:08.209332 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Apr 30 03:21:08.211190 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:21:08.213032 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:21:08.213082 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:21:08.214440 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:21:08.217276 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:21:08.221939 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:21:08.230517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:21:08.234358 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:21:08.253292 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:21:08.255101 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:21:08.257962 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:21:08.274532 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:21:08.275634 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:21:08.275665 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:21:08.277237 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:21:08.279508 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:21:08.281666 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:21:08.285992 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 30 03:21:08.297566 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:21:08.299243 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:21:08.304154 jq[1444]: false Apr 30 03:21:08.305860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:21:08.322956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:21:08.331990 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:21:08.340539 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:21:08.341347 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:21:08.342987 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 30 03:21:08.343152 dbus-daemon[1443]: [system] SELinux support is enabled Apr 30 03:21:08.345608 extend-filesystems[1445]: Found loop3 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found loop4 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found loop5 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found sr0 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda1 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda2 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda3 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found usr Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda4 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda6 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda7 Apr 30 03:21:08.347147 extend-filesystems[1445]: Found vda9 Apr 30 03:21:08.347147 extend-filesystems[1445]: Checking size of /dev/vda9 Apr 30 03:21:08.401973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1397) Apr 30 03:21:08.402046 extend-filesystems[1445]: Resized partition /dev/vda9 Apr 30 03:21:08.349940 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:21:08.412829 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:21:08.353458 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:21:08.418473 update_engine[1458]: I20250430 03:21:08.385784 1458 main.cc:92] Flatcar Update Engine starting Apr 30 03:21:08.418473 update_engine[1458]: I20250430 03:21:08.397176 1458 update_check_scheduler.cc:74] Next update check in 11m56s Apr 30 03:21:08.358713 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:21:08.372414 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 30 03:21:08.419078 jq[1461]: true Apr 30 03:21:08.372843 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:21:08.377867 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:21:08.378444 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:21:08.381556 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:21:08.383069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:21:08.401727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:21:08.401796 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:21:08.425864 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 03:21:08.406887 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:21:08.406907 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:21:08.411514 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:21:08.415097 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:21:08.424442 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:21:08.424474 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:21:08.426137 systemd-logind[1452]: New seat seat0. Apr 30 03:21:08.433849 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 30 03:21:08.435331 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:21:08.441526 jq[1473]: true Apr 30 03:21:08.493516 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:21:08.536834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:21:08.547793 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 03:21:08.550534 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:21:08.558480 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:21:08.558846 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:21:08.573024 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:21:08.654040 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:21:08.660060 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:21:08.677244 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:21:08.680291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:21:08.681773 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:21:08.824234 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 03:21:08.824234 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:21:08.824234 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 03:21:08.831472 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Apr 30 03:21:08.827893 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:21:08.828306 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:21:08.840928 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:21:08.844020 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Apr 30 03:21:08.848065 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:21:09.098890 containerd[1466]: time="2025-04-30T03:21:09.098676577Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:21:09.132414 containerd[1466]: time="2025-04-30T03:21:09.132317738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135155 containerd[1466]: time="2025-04-30T03:21:09.135103844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135155 containerd[1466]: time="2025-04-30T03:21:09.135136586Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:21:09.135155 containerd[1466]: time="2025-04-30T03:21:09.135151594Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:21:09.135425 containerd[1466]: time="2025-04-30T03:21:09.135391323Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:21:09.135456 containerd[1466]: time="2025-04-30T03:21:09.135435246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135538 containerd[1466]: time="2025-04-30T03:21:09.135517781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135538 containerd[1466]: time="2025-04-30T03:21:09.135533891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135864 containerd[1466]: time="2025-04-30T03:21:09.135840726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135864 containerd[1466]: time="2025-04-30T03:21:09.135860473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135939 containerd[1466]: time="2025-04-30T03:21:09.135876503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:21:09.135939 containerd[1466]: time="2025-04-30T03:21:09.135886683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:21:09.136019 containerd[1466]: time="2025-04-30T03:21:09.136000636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:21:09.136352 containerd[1466]: time="2025-04-30T03:21:09.136316328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:21:09.136480 containerd[1466]: time="2025-04-30T03:21:09.136453967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:21:09.136503 containerd[1466]: time="2025-04-30T03:21:09.136486828Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 30 03:21:09.136673 containerd[1466]: time="2025-04-30T03:21:09.136642400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:21:09.136791 containerd[1466]: time="2025-04-30T03:21:09.136751775Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:21:09.142752 containerd[1466]: time="2025-04-30T03:21:09.142672651Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:21:09.142849 containerd[1466]: time="2025-04-30T03:21:09.142788869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:21:09.142849 containerd[1466]: time="2025-04-30T03:21:09.142809077Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:21:09.142849 containerd[1466]: time="2025-04-30T03:21:09.142824536Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:21:09.142907 containerd[1466]: time="2025-04-30T03:21:09.142861345Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:21:09.143108 containerd[1466]: time="2025-04-30T03:21:09.143068984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:21:09.144255 containerd[1466]: time="2025-04-30T03:21:09.143984031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:21:09.144544 containerd[1466]: time="2025-04-30T03:21:09.144499428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:21:09.144577 containerd[1466]: time="2025-04-30T03:21:09.144544713Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 30 03:21:09.144577 containerd[1466]: time="2025-04-30T03:21:09.144562606Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:21:09.144639 containerd[1466]: time="2025-04-30T03:21:09.144580480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144639 containerd[1466]: time="2025-04-30T03:21:09.144602010Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144639 containerd[1466]: time="2025-04-30T03:21:09.144622138Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144748 containerd[1466]: time="2025-04-30T03:21:09.144658356Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144748 containerd[1466]: time="2025-04-30T03:21:09.144681910Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144748 containerd[1466]: time="2025-04-30T03:21:09.144702809Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144822 containerd[1466]: time="2025-04-30T03:21:09.144749427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144822 containerd[1466]: time="2025-04-30T03:21:09.144768863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:21:09.144822 containerd[1466]: time="2025-04-30T03:21:09.144797106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 30 03:21:09.144822 containerd[1466]: time="2025-04-30T03:21:09.144812375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144825259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144842160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144855005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144869311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144881695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144899328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144917983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144945094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144958609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144975360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.144988816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.145006459Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.145037827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.145051052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.151353 containerd[1466]: time="2025-04-30T03:21:09.145062003Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:21:09.147339 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145123428Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145139979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145153845Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145165908Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145175977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145196144Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145214669Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:21:09.152066 containerd[1466]: time="2025-04-30T03:21:09.145231641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.145504252Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.145558484Z" level=info msg="Connect containerd service" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.145605723Z" level=info msg="using legacy CRI server" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.145615200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.145768368Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:21:09.152389 containerd[1466]: 
time="2025-04-30T03:21:09.146542710Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.146763615Z" level=info msg="Start subscribing containerd event" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.146864874Z" level=info msg="Start recovering state" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.146961736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.146961896Z" level=info msg="Start event monitor" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.147011399Z" level=info msg="Start snapshots syncer" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.147021839Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.147036266Z" level=info msg="Start streaming server" Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.147023662Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:21:09.152389 containerd[1466]: time="2025-04-30T03:21:09.147180336Z" level=info msg="containerd successfully booted in 0.049770s" Apr 30 03:21:09.322050 systemd-networkd[1404]: eth0: Gained IPv6LL Apr 30 03:21:09.327027 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:21:09.420501 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:21:09.429989 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 03:21:09.433016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 03:21:09.435842 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:21:09.459726 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 03:21:09.460052 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 03:21:09.464393 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:21:09.466910 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:21:10.734019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:21:10.735703 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:21:10.750927 systemd[1]: Startup finished in 1.299s (kernel) + 6.589s (initrd) + 5.726s (userspace) = 13.616s. Apr 30 03:21:10.751573 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:21:11.553438 kubelet[1548]: E0430 03:21:11.553302 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:21:11.558743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:21:11.558953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:21:11.559334 systemd[1]: kubelet.service: Consumed 1.928s CPU time. Apr 30 03:21:12.800060 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:21:12.801627 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:40326.service - OpenSSH per-connection server daemon (10.0.0.1:40326). 
Apr 30 03:21:12.845366 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 40326 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:12.847431 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:12.860028 systemd-logind[1452]: New session 1 of user core. Apr 30 03:21:12.861480 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:21:12.871194 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:21:12.885258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:21:12.900220 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:21:12.903635 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:21:13.035697 systemd[1566]: Queued start job for default target default.target. Apr 30 03:21:13.048537 systemd[1566]: Created slice app.slice - User Application Slice. Apr 30 03:21:13.048571 systemd[1566]: Reached target paths.target - Paths. Apr 30 03:21:13.048586 systemd[1566]: Reached target timers.target - Timers. Apr 30 03:21:13.050555 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:21:13.064609 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:21:13.064872 systemd[1566]: Reached target sockets.target - Sockets. Apr 30 03:21:13.064896 systemd[1566]: Reached target basic.target - Basic System. Apr 30 03:21:13.064949 systemd[1566]: Reached target default.target - Main User Target. Apr 30 03:21:13.064989 systemd[1566]: Startup finished in 153ms. Apr 30 03:21:13.065604 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:21:13.067439 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 30 03:21:13.132431 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:40328.service - OpenSSH per-connection server daemon (10.0.0.1:40328). Apr 30 03:21:13.168581 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:13.170474 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:13.174889 systemd-logind[1452]: New session 2 of user core. Apr 30 03:21:13.185878 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:21:13.242093 sshd[1577]: pam_unix(sshd:session): session closed for user core Apr 30 03:21:13.251519 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:40328.service: Deactivated successfully. Apr 30 03:21:13.253465 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:21:13.255020 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:21:13.272006 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:40344.service - OpenSSH per-connection server daemon (10.0.0.1:40344). Apr 30 03:21:13.272989 systemd-logind[1452]: Removed session 2. Apr 30 03:21:13.301007 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 40344 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:13.302841 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:13.307023 systemd-logind[1452]: New session 3 of user core. Apr 30 03:21:13.317867 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:21:13.367848 sshd[1584]: pam_unix(sshd:session): session closed for user core Apr 30 03:21:13.384548 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:40344.service: Deactivated successfully. Apr 30 03:21:13.386241 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:21:13.387663 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. 
Apr 30 03:21:13.406088 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:40346.service - OpenSSH per-connection server daemon (10.0.0.1:40346). Apr 30 03:21:13.407139 systemd-logind[1452]: Removed session 3. Apr 30 03:21:13.433456 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 40346 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:13.435056 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:13.438988 systemd-logind[1452]: New session 4 of user core. Apr 30 03:21:13.448868 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:21:13.505083 sshd[1591]: pam_unix(sshd:session): session closed for user core Apr 30 03:21:13.515784 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:40346.service: Deactivated successfully. Apr 30 03:21:13.517668 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:21:13.519124 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:21:13.531087 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:40358.service - OpenSSH per-connection server daemon (10.0.0.1:40358). Apr 30 03:21:13.532176 systemd-logind[1452]: Removed session 4. Apr 30 03:21:13.561401 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 40358 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:13.563248 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:13.567594 systemd-logind[1452]: New session 5 of user core. Apr 30 03:21:13.580965 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 03:21:13.643983 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:21:13.644466 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:21:13.664449 sudo[1601]: pam_unix(sudo:session): session closed for user root Apr 30 03:21:13.667499 sshd[1598]: pam_unix(sshd:session): session closed for user core Apr 30 03:21:13.680975 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:40358.service: Deactivated successfully. Apr 30 03:21:13.683148 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:21:13.685008 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:21:13.694287 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:40372.service - OpenSSH per-connection server daemon (10.0.0.1:40372). Apr 30 03:21:13.695624 systemd-logind[1452]: Removed session 5. Apr 30 03:21:13.729117 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 40372 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:13.731528 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:13.736950 systemd-logind[1452]: New session 6 of user core. Apr 30 03:21:13.751081 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:21:13.808951 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:21:13.809345 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:21:13.814293 sudo[1610]: pam_unix(sudo:session): session closed for user root Apr 30 03:21:13.822940 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:21:13.823391 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:21:13.847216 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 30 03:21:13.849482 auditctl[1613]: No rules Apr 30 03:21:13.851140 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:21:13.851482 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:21:13.853594 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:21:13.889629 augenrules[1631]: No rules Apr 30 03:21:13.891794 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:21:13.893450 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 30 03:21:13.895951 sshd[1606]: pam_unix(sshd:session): session closed for user core Apr 30 03:21:13.908159 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:40372.service: Deactivated successfully. Apr 30 03:21:13.910056 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:21:13.911888 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:21:13.924003 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:40376.service - OpenSSH per-connection server daemon (10.0.0.1:40376). Apr 30 03:21:13.924973 systemd-logind[1452]: Removed session 6. Apr 30 03:21:13.953438 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 40376 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:21:13.955282 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:21:13.959941 systemd-logind[1452]: New session 7 of user core. Apr 30 03:21:13.969843 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:21:14.023885 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:21:14.024229 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:21:14.048090 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 03:21:14.071615 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Apr 30 03:21:14.071883 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 03:21:14.661844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:21:14.662083 systemd[1]: kubelet.service: Consumed 1.928s CPU time. Apr 30 03:21:14.680074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:21:14.714253 systemd[1]: Reloading requested from client PID 1683 ('systemctl') (unit session-7.scope)... Apr 30 03:21:14.714276 systemd[1]: Reloading... Apr 30 03:21:14.817760 zram_generator::config[1722]: No configuration found. Apr 30 03:21:15.468372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:21:15.546792 systemd[1]: Reloading finished in 831 ms. Apr 30 03:21:15.610406 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:21:15.610512 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:21:15.610845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:21:15.613652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:21:15.788485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:21:15.794248 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:21:15.851480 kubelet[1770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:21:15.851480 kubelet[1770]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 30 03:21:15.851480 kubelet[1770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:21:15.852050 kubelet[1770]: I0430 03:21:15.851522 1770 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:21:16.329806 kubelet[1770]: I0430 03:21:16.329699 1770 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:21:16.329806 kubelet[1770]: I0430 03:21:16.329792 1770 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:21:16.330202 kubelet[1770]: I0430 03:21:16.330165 1770 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:21:16.358698 kubelet[1770]: I0430 03:21:16.358537 1770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:21:16.366582 kubelet[1770]: E0430 03:21:16.366488 1770 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:21:16.366582 kubelet[1770]: I0430 03:21:16.366555 1770 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:21:16.374163 kubelet[1770]: I0430 03:21:16.374117 1770 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:21:16.376297 kubelet[1770]: I0430 03:21:16.376037 1770 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:21:16.376481 kubelet[1770]: I0430 03:21:16.376285 1770 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.31","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:21:16.376624 kubelet[1770]: I0430 03:21:16.376485 1770 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 03:21:16.376624 kubelet[1770]: I0430 03:21:16.376496 1770 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:21:16.376749 kubelet[1770]: I0430 03:21:16.376709 1770 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:21:16.380915 kubelet[1770]: I0430 03:21:16.380876 1770 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:21:16.380915 kubelet[1770]: I0430 03:21:16.380901 1770 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:21:16.380915 kubelet[1770]: I0430 03:21:16.380920 1770 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:21:16.381010 kubelet[1770]: I0430 03:21:16.380932 1770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:21:16.381010 kubelet[1770]: E0430 03:21:16.380999 1770 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:16.381051 kubelet[1770]: E0430 03:21:16.381037 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:16.384723 kubelet[1770]: I0430 03:21:16.384693 1770 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:21:16.385161 kubelet[1770]: I0430 03:21:16.385137 1770 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:21:16.385222 kubelet[1770]: W0430 03:21:16.385207 1770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:21:16.386590 kubelet[1770]: W0430 03:21:16.386533 1770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Apr 30 03:21:16.386636 kubelet[1770]: E0430 03:21:16.386603 1770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.31\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Apr 30 03:21:16.386660 kubelet[1770]: W0430 03:21:16.386537 1770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 30 03:21:16.386660 kubelet[1770]: E0430 03:21:16.386655 1770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Apr 30 03:21:16.387748 kubelet[1770]: I0430 03:21:16.387715 1770 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:21:16.387803 kubelet[1770]: I0430 03:21:16.387777 1770 server.go:1287] "Started kubelet" Apr 30 03:21:16.390792 kubelet[1770]: I0430 03:21:16.390270 1770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:21:16.390792 kubelet[1770]: I0430 03:21:16.390271 1770 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:21:16.390792 kubelet[1770]: I0430 03:21:16.390282 1770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:21:16.390792 kubelet[1770]: I0430 03:21:16.390698 1770 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:21:16.391488 kubelet[1770]: I0430 03:21:16.391344 1770 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:21:16.392299 kubelet[1770]: I0430 03:21:16.392097 1770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:21:16.392299 kubelet[1770]: E0430 03:21:16.392233 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" Apr 30 03:21:16.392299 kubelet[1770]: I0430 03:21:16.392276 1770 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:21:16.393709 kubelet[1770]: I0430 03:21:16.392517 1770 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:21:16.393709 kubelet[1770]: I0430 03:21:16.392794 1770 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:21:16.393709 kubelet[1770]: I0430 03:21:16.393702 1770 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:21:16.393942 kubelet[1770]: I0430 03:21:16.393832 1770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:21:16.396760 kubelet[1770]: E0430 03:21:16.393385 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.31.183afa8a0cdd53a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.31,UID:10.0.0.31,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.31,},FirstTimestamp:2025-04-30 
03:21:16.387750817 +0000 UTC m=+0.588519536,LastTimestamp:2025-04-30 03:21:16.387750817 +0000 UTC m=+0.588519536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.31,}" Apr 30 03:21:16.396760 kubelet[1770]: I0430 03:21:16.395359 1770 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:21:16.399240 kubelet[1770]: E0430 03:21:16.399192 1770 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:21:16.401476 kubelet[1770]: E0430 03:21:16.400824 1770 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 30 03:21:16.469762 kubelet[1770]: W0430 03:21:16.468828 1770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Apr 30 03:21:16.469762 kubelet[1770]: E0430 03:21:16.468870 1770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Apr 30 03:21:16.469762 kubelet[1770]: E0430 03:21:16.468943 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.31.183afa8a0d8b9425 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.31,UID:10.0.0.31,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.31,},FirstTimestamp:2025-04-30 03:21:16.399170597 +0000 UTC m=+0.599939316,LastTimestamp:2025-04-30 03:21:16.399170597 +0000 UTC m=+0.599939316,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.31,}" Apr 30 03:21:16.471492 kubelet[1770]: I0430 03:21:16.471459 1770 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:21:16.471492 kubelet[1770]: I0430 03:21:16.471482 1770 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:21:16.471492 kubelet[1770]: I0430 03:21:16.471497 1770 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:21:16.493362 kubelet[1770]: E0430 03:21:16.493315 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" Apr 30 03:21:16.594206 kubelet[1770]: E0430 03:21:16.594029 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" Apr 30 03:21:16.667110 kubelet[1770]: E0430 03:21:16.667036 1770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.31\" not found" node="10.0.0.31" Apr 30 03:21:16.694456 kubelet[1770]: E0430 03:21:16.694371 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" Apr 30 03:21:16.794995 kubelet[1770]: E0430 03:21:16.794918 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" Apr 30 03:21:16.895760 kubelet[1770]: E0430 03:21:16.895640 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" 
Apr 30 03:21:16.996495 kubelet[1770]: E0430 03:21:16.996392 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found"
Apr 30 03:21:17.038412 kubelet[1770]: I0430 03:21:17.038371 1770 policy_none.go:49] "None policy: Start"
Apr 30 03:21:17.038412 kubelet[1770]: I0430 03:21:17.038414 1770 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 03:21:17.038661 kubelet[1770]: I0430 03:21:17.038434 1770 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:21:17.057360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 03:21:17.076702 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 03:21:17.077720 kubelet[1770]: I0430 03:21:17.077666 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:21:17.079155 kubelet[1770]: I0430 03:21:17.079129 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:21:17.079580 kubelet[1770]: I0430 03:21:17.079242 1770 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 03:21:17.079580 kubelet[1770]: I0430 03:21:17.079276 1770 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 03:21:17.079580 kubelet[1770]: I0430 03:21:17.079287 1770 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 03:21:17.079580 kubelet[1770]: E0430 03:21:17.079425 1770 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:21:17.083219 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 03:21:17.096843 kubelet[1770]: E0430 03:21:17.096791 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found"
Apr 30 03:21:17.100228 kubelet[1770]: I0430 03:21:17.100188 1770 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:21:17.100508 kubelet[1770]: I0430 03:21:17.100489 1770 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 03:21:17.100596 kubelet[1770]: I0430 03:21:17.100507 1770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:21:17.100946 kubelet[1770]: I0430 03:21:17.100837 1770 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:21:17.102046 kubelet[1770]: E0430 03:21:17.101974 1770 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 03:21:17.102046 kubelet[1770]: E0430 03:21:17.102013 1770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.31\" not found"
Apr 30 03:21:17.202664 kubelet[1770]: I0430 03:21:17.202434 1770 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.31"
Apr 30 03:21:17.207616 kubelet[1770]: I0430 03:21:17.207556 1770 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.31"
Apr 30 03:21:17.207616 kubelet[1770]: E0430 03:21:17.207593 1770 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.31\": node \"10.0.0.31\" not found"
Apr 30 03:21:17.213234 kubelet[1770]: E0430 03:21:17.213168 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.31\" not found"
Apr 30 03:21:17.315061 kubelet[1770]: I0430 03:21:17.315012 1770 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Apr 30 03:21:17.315344 containerd[1466]: time="2025-04-30T03:21:17.315283784Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 03:21:17.315947 kubelet[1770]: I0430 03:21:17.315493 1770 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Apr 30 03:21:17.334250 kubelet[1770]: I0430 03:21:17.334165 1770 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 30 03:21:17.334489 kubelet[1770]: W0430 03:21:17.334443 1770 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Apr 30 03:21:17.334547 kubelet[1770]: W0430 03:21:17.334489 1770 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Apr 30 03:21:17.334750 kubelet[1770]: E0430 03:21:17.334508 1770 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.20:6443/api/v1/namespaces/default/events/10.0.0.31.183afa8a11cab6b3\": read tcp 10.0.0.31:42846->10.0.0.20:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.31.183afa8a11cab6b3 default 693 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.31,UID:10.0.0.31,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.31 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.31,},FirstTimestamp:2025-04-30 03:21:16 +0000 UTC,LastTimestamp:2025-04-30 03:21:17.202390017 +0000 UTC m=+1.403158736,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.31,}"
Apr 30 03:21:17.381798 kubelet[1770]: I0430 03:21:17.381704 1770 apiserver.go:52] "Watching apiserver"
Apr 30 03:21:17.381971 kubelet[1770]: E0430 03:21:17.381831 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:17.404142 systemd[1]: Created slice kubepods-besteffort-pod0c13b6a1_52fd_4cfa_94c6_2de5ba54f9f0.slice - libcontainer container kubepods-besteffort-pod0c13b6a1_52fd_4cfa_94c6_2de5ba54f9f0.slice.
Apr 30 03:21:17.405251 sudo[1642]: pam_unix(sudo:session): session closed for user root
Apr 30 03:21:17.409181 sshd[1639]: pam_unix(sshd:session): session closed for user core
Apr 30 03:21:17.413032 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:40376.service: Deactivated successfully.
Apr 30 03:21:17.415814 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:21:17.418443 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:21:17.421262 systemd[1]: Created slice kubepods-besteffort-podd176cd5f_2562_4c26_93f6_67799a61f96e.slice - libcontainer container kubepods-besteffort-podd176cd5f_2562_4c26_93f6_67799a61f96e.slice.
Apr 30 03:21:17.421719 systemd-logind[1452]: Removed session 7.
Apr 30 03:21:17.442951 kubelet[1770]: E0430 03:21:17.442886 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16"
Apr 30 03:21:17.493290 kubelet[1770]: I0430 03:21:17.493094 1770 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 03:21:17.499239 kubelet[1770]: I0430 03:21:17.499161 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0-lib-modules\") pod \"kube-proxy-cvqrz\" (UID: \"0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0\") " pod="kube-system/kube-proxy-cvqrz"
Apr 30 03:21:17.499239 kubelet[1770]: I0430 03:21:17.499221 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d176cd5f-2562-4c26-93f6-67799a61f96e-tigera-ca-bundle\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499239 kubelet[1770]: I0430 03:21:17.499248 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-bin-dir\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499421 kubelet[1770]: I0430 03:21:17.499268 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-flexvol-driver-host\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499421 kubelet[1770]: I0430 03:21:17.499331 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0-xtables-lock\") pod \"kube-proxy-cvqrz\" (UID: \"0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0\") " pod="kube-system/kube-proxy-cvqrz"
Apr 30 03:21:17.499421 kubelet[1770]: I0430 03:21:17.499385 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6562acde-1c3c-4d40-a30d-35106d6fab16-varrun\") pod \"csi-node-driver-z88pf\" (UID: \"6562acde-1c3c-4d40-a30d-35106d6fab16\") " pod="calico-system/csi-node-driver-z88pf"
Apr 30 03:21:17.499421 kubelet[1770]: I0430 03:21:17.499409 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d176cd5f-2562-4c26-93f6-67799a61f96e-node-certs\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499563 kubelet[1770]: I0430 03:21:17.499426 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-run-calico\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499563 kubelet[1770]: I0430 03:21:17.499485 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-log-dir\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499563 kubelet[1770]: I0430 03:21:17.499533 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmgb\" (UniqueName: \"kubernetes.io/projected/d176cd5f-2562-4c26-93f6-67799a61f96e-kube-api-access-qvmgb\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499563 kubelet[1770]: I0430 03:21:17.499556 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj68s\" (UniqueName: \"kubernetes.io/projected/0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0-kube-api-access-zj68s\") pod \"kube-proxy-cvqrz\" (UID: \"0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0\") " pod="kube-system/kube-proxy-cvqrz"
Apr 30 03:21:17.499695 kubelet[1770]: I0430 03:21:17.499575 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6562acde-1c3c-4d40-a30d-35106d6fab16-socket-dir\") pod \"csi-node-driver-z88pf\" (UID: \"6562acde-1c3c-4d40-a30d-35106d6fab16\") " pod="calico-system/csi-node-driver-z88pf"
Apr 30 03:21:17.499695 kubelet[1770]: I0430 03:21:17.499591 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddch5\" (UniqueName: \"kubernetes.io/projected/6562acde-1c3c-4d40-a30d-35106d6fab16-kube-api-access-ddch5\") pod \"csi-node-driver-z88pf\" (UID: \"6562acde-1c3c-4d40-a30d-35106d6fab16\") " pod="calico-system/csi-node-driver-z88pf"
Apr 30 03:21:17.499695 kubelet[1770]: I0430 03:21:17.499614 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-policysync\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499695 kubelet[1770]: I0430 03:21:17.499635 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-net-dir\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499846 kubelet[1770]: I0430 03:21:17.499706 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-lib-calico\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499846 kubelet[1770]: I0430 03:21:17.499794 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0-kube-proxy\") pod \"kube-proxy-cvqrz\" (UID: \"0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0\") " pod="kube-system/kube-proxy-cvqrz"
Apr 30 03:21:17.499846 kubelet[1770]: I0430 03:21:17.499816 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6562acde-1c3c-4d40-a30d-35106d6fab16-kubelet-dir\") pod \"csi-node-driver-z88pf\" (UID: \"6562acde-1c3c-4d40-a30d-35106d6fab16\") " pod="calico-system/csi-node-driver-z88pf"
Apr 30 03:21:17.499846 kubelet[1770]: I0430 03:21:17.499832 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6562acde-1c3c-4d40-a30d-35106d6fab16-registration-dir\") pod \"csi-node-driver-z88pf\" (UID: \"6562acde-1c3c-4d40-a30d-35106d6fab16\") " pod="calico-system/csi-node-driver-z88pf"
Apr 30 03:21:17.499979 kubelet[1770]: I0430 03:21:17.499858 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-lib-modules\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.499979 kubelet[1770]: I0430 03:21:17.499873 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-xtables-lock\") pod \"calico-node-fh28g\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " pod="calico-system/calico-node-fh28g"
Apr 30 03:21:17.602425 kubelet[1770]: E0430 03:21:17.602389 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:21:17.602425 kubelet[1770]: W0430 03:21:17.602417 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:21:17.602577 kubelet[1770]: E0430 03:21:17.602437 1770 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:21:17.605573 kubelet[1770]: E0430 03:21:17.605535 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:21:17.605573 kubelet[1770]: W0430 03:21:17.605561 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:21:17.605803 kubelet[1770]: E0430 03:21:17.605578 1770 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:21:17.623075 kubelet[1770]: E0430 03:21:17.623026 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:21:17.623075 kubelet[1770]: W0430 03:21:17.623058 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:21:17.623075 kubelet[1770]: E0430 03:21:17.623081 1770 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:21:17.626006 kubelet[1770]: E0430 03:21:17.625895 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:21:17.626006 kubelet[1770]: W0430 03:21:17.625928 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:21:17.626006 kubelet[1770]: E0430 03:21:17.625954 1770 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:21:17.626427 kubelet[1770]: E0430 03:21:17.626398 1770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:21:17.626427 kubelet[1770]: W0430 03:21:17.626423 1770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:21:17.626539 kubelet[1770]: E0430 03:21:17.626443 1770 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:21:17.714132 kubelet[1770]: E0430 03:21:17.714058 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:17.715119 containerd[1466]: time="2025-04-30T03:21:17.715056483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvqrz,Uid:0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0,Namespace:kube-system,Attempt:0,}"
Apr 30 03:21:17.724311 kubelet[1770]: E0430 03:21:17.724243 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:17.726189 containerd[1466]: time="2025-04-30T03:21:17.725161357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fh28g,Uid:d176cd5f-2562-4c26-93f6-67799a61f96e,Namespace:calico-system,Attempt:0,}"
Apr 30 03:21:18.382090 kubelet[1770]: E0430 03:21:18.382048 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:19.080265 kubelet[1770]: E0430 03:21:19.080185 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16"
Apr 30 03:21:19.383116 kubelet[1770]: E0430 03:21:19.383058 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:19.790000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494401222.mount: Deactivated successfully.
Apr 30 03:21:19.799473 containerd[1466]: time="2025-04-30T03:21:19.799389776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:21:19.800694 containerd[1466]: time="2025-04-30T03:21:19.800654128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:21:19.801720 containerd[1466]: time="2025-04-30T03:21:19.801660455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 30 03:21:19.802817 containerd[1466]: time="2025-04-30T03:21:19.802765358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:21:19.803744 containerd[1466]: time="2025-04-30T03:21:19.803690914Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:21:19.807348 containerd[1466]: time="2025-04-30T03:21:19.807296888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:21:19.808168 containerd[1466]: time="2025-04-30T03:21:19.808116185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.092912145s"
Apr 30 03:21:19.810796 containerd[1466]: time="2025-04-30T03:21:19.810722944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.085473382s"
Apr 30 03:21:19.996029 containerd[1466]: time="2025-04-30T03:21:19.995799825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:21:19.996029 containerd[1466]: time="2025-04-30T03:21:19.995877241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:21:19.996029 containerd[1466]: time="2025-04-30T03:21:19.995893561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:21:19.996223 containerd[1466]: time="2025-04-30T03:21:19.995987768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:21:20.024375 containerd[1466]: time="2025-04-30T03:21:20.024173747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:21:20.025575 containerd[1466]: time="2025-04-30T03:21:20.024379713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:21:20.025575 containerd[1466]: time="2025-04-30T03:21:20.025524801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:21:20.025761 containerd[1466]: time="2025-04-30T03:21:20.025613928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:21:20.107012 systemd[1]: Started cri-containerd-2a054926a382ea6296d5317d9537c065e9e7f85a58ea873381cb841e5e3dd9e3.scope - libcontainer container 2a054926a382ea6296d5317d9537c065e9e7f85a58ea873381cb841e5e3dd9e3.
Apr 30 03:21:20.110121 systemd[1]: Started cri-containerd-af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82.scope - libcontainer container af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82.
Apr 30 03:21:20.150364 containerd[1466]: time="2025-04-30T03:21:20.150311916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fh28g,Uid:d176cd5f-2562-4c26-93f6-67799a61f96e,Namespace:calico-system,Attempt:0,} returns sandbox id \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\""
Apr 30 03:21:20.151862 kubelet[1770]: E0430 03:21:20.151817 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:20.152718 containerd[1466]: time="2025-04-30T03:21:20.152686660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvqrz,Uid:0c13b6a1-52fd-4cfa-94c6-2de5ba54f9f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a054926a382ea6296d5317d9537c065e9e7f85a58ea873381cb841e5e3dd9e3\""
Apr 30 03:21:20.153348 kubelet[1770]: E0430 03:21:20.153283 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:20.153416 containerd[1466]: time="2025-04-30T03:21:20.153391402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
Apr 30 03:21:20.384003 kubelet[1770]: E0430 03:21:20.383945 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:21.083102 kubelet[1770]: E0430 03:21:21.083045 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16"
Apr 30 03:21:21.384659 kubelet[1770]: E0430 03:21:21.384577 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:21.554245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221542633.mount: Deactivated successfully.
Apr 30 03:21:21.631223 containerd[1466]: time="2025-04-30T03:21:21.631155195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:21:21.634577 containerd[1466]: time="2025-04-30T03:21:21.634439926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6859697"
Apr 30 03:21:21.634840 containerd[1466]: time="2025-04-30T03:21:21.634606899Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:21:21.638225 containerd[1466]: time="2025-04-30T03:21:21.638174451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:21:21.638769 containerd[1466]: time="2025-04-30T03:21:21.638701209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.485270944s"
Apr 30 03:21:21.638769 containerd[1466]: time="2025-04-30T03:21:21.638755861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
Apr 30 03:21:21.640213 containerd[1466]: time="2025-04-30T03:21:21.640169122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 03:21:21.641960 containerd[1466]: time="2025-04-30T03:21:21.641771107Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 30 03:21:21.663397 containerd[1466]: time="2025-04-30T03:21:21.663320037Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd\""
Apr 30 03:21:21.664076 containerd[1466]: time="2025-04-30T03:21:21.664037533Z" level=info msg="StartContainer for \"2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd\""
Apr 30 03:21:21.707909 systemd[1]: Started cri-containerd-2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd.scope - libcontainer container 2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd.
Apr 30 03:21:21.784769 systemd[1]: cri-containerd-2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd.scope: Deactivated successfully.
Apr 30 03:21:21.792137 containerd[1466]: time="2025-04-30T03:21:21.792098248Z" level=info msg="StartContainer for \"2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd\" returns successfully"
Apr 30 03:21:21.814715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd-rootfs.mount: Deactivated successfully.
Apr 30 03:21:21.888441 containerd[1466]: time="2025-04-30T03:21:21.888144904Z" level=info msg="shim disconnected" id=2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd namespace=k8s.io
Apr 30 03:21:21.888441 containerd[1466]: time="2025-04-30T03:21:21.888210808Z" level=warning msg="cleaning up after shim disconnected" id=2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd namespace=k8s.io
Apr 30 03:21:21.888441 containerd[1466]: time="2025-04-30T03:21:21.888219494Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:21:22.093257 kubelet[1770]: E0430 03:21:22.093069 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:22.385258 kubelet[1770]: E0430 03:21:22.384923 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:22.716794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593029921.mount: Deactivated successfully.
Apr 30 03:21:23.080362 kubelet[1770]: E0430 03:21:23.080166 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:23.385781 kubelet[1770]: E0430 03:21:23.385705 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:23.798383 containerd[1466]: time="2025-04-30T03:21:23.798202266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:23.799187 containerd[1466]: time="2025-04-30T03:21:23.799107874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" Apr 30 03:21:23.800773 containerd[1466]: time="2025-04-30T03:21:23.800661007Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:23.803551 containerd[1466]: time="2025-04-30T03:21:23.803502537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:23.804495 containerd[1466]: time="2025-04-30T03:21:23.804422212Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.164210871s" Apr 30 03:21:23.804495 containerd[1466]: 
time="2025-04-30T03:21:23.804477516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 03:21:23.806022 containerd[1466]: time="2025-04-30T03:21:23.805983150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:21:23.807399 containerd[1466]: time="2025-04-30T03:21:23.807329516Z" level=info msg="CreateContainer within sandbox \"2a054926a382ea6296d5317d9537c065e9e7f85a58ea873381cb841e5e3dd9e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:21:23.911133 containerd[1466]: time="2025-04-30T03:21:23.911045287Z" level=info msg="CreateContainer within sandbox \"2a054926a382ea6296d5317d9537c065e9e7f85a58ea873381cb841e5e3dd9e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d1df036f5a1b87cbded16493fa3737c3eee10151da9b449eca9b2edcc1ec46e\"" Apr 30 03:21:23.911905 containerd[1466]: time="2025-04-30T03:21:23.911864393Z" level=info msg="StartContainer for \"3d1df036f5a1b87cbded16493fa3737c3eee10151da9b449eca9b2edcc1ec46e\"" Apr 30 03:21:23.959906 systemd[1]: Started cri-containerd-3d1df036f5a1b87cbded16493fa3737c3eee10151da9b449eca9b2edcc1ec46e.scope - libcontainer container 3d1df036f5a1b87cbded16493fa3737c3eee10151da9b449eca9b2edcc1ec46e. 
Apr 30 03:21:23.999237 containerd[1466]: time="2025-04-30T03:21:23.999186645Z" level=info msg="StartContainer for \"3d1df036f5a1b87cbded16493fa3737c3eee10151da9b449eca9b2edcc1ec46e\" returns successfully" Apr 30 03:21:24.099116 kubelet[1770]: E0430 03:21:24.098963 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:21:24.109251 kubelet[1770]: I0430 03:21:24.109160 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvqrz" podStartSLOduration=3.457147732 podStartE2EDuration="7.109138403s" podCreationTimestamp="2025-04-30 03:21:17 +0000 UTC" firstStartedPulling="2025-04-30 03:21:20.15363537 +0000 UTC m=+4.354404079" lastFinishedPulling="2025-04-30 03:21:23.80562603 +0000 UTC m=+8.006394750" observedRunningTime="2025-04-30 03:21:24.108746277 +0000 UTC m=+8.309515016" watchObservedRunningTime="2025-04-30 03:21:24.109138403 +0000 UTC m=+8.309907122" Apr 30 03:21:24.386660 kubelet[1770]: E0430 03:21:24.386615 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:25.082535 kubelet[1770]: E0430 03:21:25.082457 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:25.100550 kubelet[1770]: E0430 03:21:25.100497 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:21:25.386867 kubelet[1770]: E0430 03:21:25.386820 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Apr 30 03:21:26.387179 kubelet[1770]: E0430 03:21:26.387133 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:27.080136 kubelet[1770]: E0430 03:21:27.079950 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:27.388216 kubelet[1770]: E0430 03:21:27.388173 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:28.388704 kubelet[1770]: E0430 03:21:28.388623 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:29.080472 kubelet[1770]: E0430 03:21:29.080388 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:29.389534 kubelet[1770]: E0430 03:21:29.389407 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:29.427550 containerd[1466]: time="2025-04-30T03:21:29.427459142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:29.428894 containerd[1466]: time="2025-04-30T03:21:29.428830785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:21:29.432150 containerd[1466]: 
time="2025-04-30T03:21:29.432081703Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:29.434681 containerd[1466]: time="2025-04-30T03:21:29.434617249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:29.435562 containerd[1466]: time="2025-04-30T03:21:29.435524130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.629486287s" Apr 30 03:21:29.435625 containerd[1466]: time="2025-04-30T03:21:29.435564706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:21:29.437699 containerd[1466]: time="2025-04-30T03:21:29.437651380Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:21:29.456750 containerd[1466]: time="2025-04-30T03:21:29.456670585Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f\"" Apr 30 03:21:29.457435 containerd[1466]: time="2025-04-30T03:21:29.457379655Z" level=info msg="StartContainer for \"56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f\"" Apr 30 03:21:29.508933 systemd[1]: Started 
cri-containerd-56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f.scope - libcontainer container 56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f. Apr 30 03:21:29.550749 containerd[1466]: time="2025-04-30T03:21:29.550692082Z" level=info msg="StartContainer for \"56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f\" returns successfully" Apr 30 03:21:30.109342 kubelet[1770]: E0430 03:21:30.109277 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:21:30.389671 kubelet[1770]: E0430 03:21:30.389600 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:31.083046 kubelet[1770]: E0430 03:21:31.082994 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:31.111524 kubelet[1770]: E0430 03:21:31.111443 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:21:31.287865 systemd[1]: cri-containerd-56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f.scope: Deactivated successfully. Apr 30 03:21:31.288694 systemd[1]: cri-containerd-56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f.scope: Consumed 1.368s CPU time. 
Apr 30 03:21:31.295091 kubelet[1770]: I0430 03:21:31.295050 1770 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 03:21:31.314749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f-rootfs.mount: Deactivated successfully. Apr 30 03:21:31.390585 kubelet[1770]: E0430 03:21:31.390518 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:32.331780 containerd[1466]: time="2025-04-30T03:21:32.331653691Z" level=info msg="shim disconnected" id=56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f namespace=k8s.io Apr 30 03:21:32.331780 containerd[1466]: time="2025-04-30T03:21:32.331775029Z" level=warning msg="cleaning up after shim disconnected" id=56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f namespace=k8s.io Apr 30 03:21:32.331780 containerd[1466]: time="2025-04-30T03:21:32.331789376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:21:32.390992 kubelet[1770]: E0430 03:21:32.390913 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:33.086000 systemd[1]: Created slice kubepods-besteffort-pod6562acde_1c3c_4d40_a30d_35106d6fab16.slice - libcontainer container kubepods-besteffort-pod6562acde_1c3c_4d40_a30d_35106d6fab16.slice. 
Apr 30 03:21:33.088236 containerd[1466]: time="2025-04-30T03:21:33.088191039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z88pf,Uid:6562acde-1c3c-4d40-a30d-35106d6fab16,Namespace:calico-system,Attempt:0,}" Apr 30 03:21:33.116674 kubelet[1770]: E0430 03:21:33.116638 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:21:33.117690 containerd[1466]: time="2025-04-30T03:21:33.117376112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:21:33.231269 containerd[1466]: time="2025-04-30T03:21:33.231206626Z" level=error msg="Failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:33.231688 containerd[1466]: time="2025-04-30T03:21:33.231653474Z" level=error msg="encountered an error cleaning up failed sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:33.231745 containerd[1466]: time="2025-04-30T03:21:33.231708286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z88pf,Uid:6562acde-1c3c-4d40-a30d-35106d6fab16,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 30 03:21:33.232041 kubelet[1770]: E0430 03:21:33.231996 1770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:33.232489 kubelet[1770]: E0430 03:21:33.232152 1770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z88pf" Apr 30 03:21:33.232489 kubelet[1770]: E0430 03:21:33.232188 1770 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z88pf" Apr 30 03:21:33.232489 kubelet[1770]: E0430 03:21:33.232241 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z88pf_calico-system(6562acde-1c3c-4d40-a30d-35106d6fab16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z88pf_calico-system(6562acde-1c3c-4d40-a30d-35106d6fab16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:33.233060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5-shm.mount: Deactivated successfully. Apr 30 03:21:33.391526 kubelet[1770]: E0430 03:21:33.391469 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:34.119507 kubelet[1770]: I0430 03:21:34.119461 1770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Apr 30 03:21:34.120332 containerd[1466]: time="2025-04-30T03:21:34.120272559Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\"" Apr 30 03:21:34.120824 containerd[1466]: time="2025-04-30T03:21:34.120485358Z" level=info msg="Ensure that sandbox 262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5 in task-service has been cleanup successfully" Apr 30 03:21:34.152402 containerd[1466]: time="2025-04-30T03:21:34.152315743Z" level=error msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" failed" error="failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:34.152713 kubelet[1770]: E0430 03:21:34.152642 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Apr 30 03:21:34.152794 kubelet[1770]: E0430 03:21:34.152745 1770 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"} Apr 30 03:21:34.152838 kubelet[1770]: E0430 03:21:34.152810 1770 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6562acde-1c3c-4d40-a30d-35106d6fab16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:21:34.152900 kubelet[1770]: E0430 03:21:34.152841 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6562acde-1c3c-4d40-a30d-35106d6fab16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16" Apr 30 03:21:34.392178 kubelet[1770]: E0430 03:21:34.392113 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:34.820832 systemd[1]: Created slice kubepods-besteffort-poda59f26ad_a021_4259_8f26_9e83377cfde3.slice - libcontainer container 
kubepods-besteffort-poda59f26ad_a021_4259_8f26_9e83377cfde3.slice. Apr 30 03:21:34.902061 kubelet[1770]: I0430 03:21:34.901911 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhwtg\" (UniqueName: \"kubernetes.io/projected/a59f26ad-a021-4259-8f26-9e83377cfde3-kube-api-access-nhwtg\") pod \"nginx-deployment-7fcdb87857-sqxb2\" (UID: \"a59f26ad-a021-4259-8f26-9e83377cfde3\") " pod="default/nginx-deployment-7fcdb87857-sqxb2" Apr 30 03:21:35.124369 containerd[1466]: time="2025-04-30T03:21:35.124244624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-sqxb2,Uid:a59f26ad-a021-4259-8f26-9e83377cfde3,Namespace:default,Attempt:0,}" Apr 30 03:21:35.297058 containerd[1466]: time="2025-04-30T03:21:35.296970518Z" level=error msg="Failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:35.299145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631-shm.mount: Deactivated successfully. 
Apr 30 03:21:35.300396 containerd[1466]: time="2025-04-30T03:21:35.300341992Z" level=error msg="encountered an error cleaning up failed sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:35.300676 containerd[1466]: time="2025-04-30T03:21:35.300411142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-sqxb2,Uid:a59f26ad-a021-4259-8f26-9e83377cfde3,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:35.300769 kubelet[1770]: E0430 03:21:35.300666 1770 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:35.300811 kubelet[1770]: E0430 03:21:35.300764 1770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-sqxb2" Apr 30 03:21:35.300811 kubelet[1770]: E0430 03:21:35.300789 1770 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-sqxb2" Apr 30 03:21:35.300860 kubelet[1770]: E0430 03:21:35.300827 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-sqxb2_default(a59f26ad-a021-4259-8f26-9e83377cfde3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-sqxb2_default(a59f26ad-a021-4259-8f26-9e83377cfde3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-sqxb2" podUID="a59f26ad-a021-4259-8f26-9e83377cfde3" Apr 30 03:21:35.393170 kubelet[1770]: E0430 03:21:35.393096 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:36.123590 kubelet[1770]: I0430 03:21:36.123550 1770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Apr 30 03:21:36.124214 containerd[1466]: time="2025-04-30T03:21:36.124176951Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\"" Apr 30 03:21:36.124398 containerd[1466]: time="2025-04-30T03:21:36.124364463Z" level=info msg="Ensure that sandbox 
f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631 in task-service has been cleanup successfully" Apr 30 03:21:36.162688 containerd[1466]: time="2025-04-30T03:21:36.162611042Z" level=error msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" failed" error="failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:21:36.162991 kubelet[1770]: E0430 03:21:36.162937 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Apr 30 03:21:36.163076 kubelet[1770]: E0430 03:21:36.163004 1770 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"} Apr 30 03:21:36.163076 kubelet[1770]: E0430 03:21:36.163046 1770 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a59f26ad-a021-4259-8f26-9e83377cfde3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:21:36.163076 kubelet[1770]: E0430 03:21:36.163069 1770 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a59f26ad-a021-4259-8f26-9e83377cfde3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-sqxb2" podUID="a59f26ad-a021-4259-8f26-9e83377cfde3" Apr 30 03:21:36.381119 kubelet[1770]: E0430 03:21:36.381058 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:36.393577 kubelet[1770]: E0430 03:21:36.393514 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:37.394199 kubelet[1770]: E0430 03:21:37.393994 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:38.395015 kubelet[1770]: E0430 03:21:38.394939 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:38.997549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266063501.mount: Deactivated successfully. 
Apr 30 03:21:39.395898 kubelet[1770]: E0430 03:21:39.395841 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:40.147633 containerd[1466]: time="2025-04-30T03:21:40.147576636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:40.169835 containerd[1466]: time="2025-04-30T03:21:40.169762795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:21:40.188181 containerd[1466]: time="2025-04-30T03:21:40.188099583Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:40.206145 containerd[1466]: time="2025-04-30T03:21:40.206079832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:21:40.206726 containerd[1466]: time="2025-04-30T03:21:40.206687070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.089268808s" Apr 30 03:21:40.206801 containerd[1466]: time="2025-04-30T03:21:40.206722748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:21:40.216888 containerd[1466]: time="2025-04-30T03:21:40.216821230Z" level=info msg="CreateContainer within sandbox 
\"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:21:40.396427 kubelet[1770]: E0430 03:21:40.396330 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:21:40.638282 containerd[1466]: time="2025-04-30T03:21:40.638208259Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\"" Apr 30 03:21:40.638833 containerd[1466]: time="2025-04-30T03:21:40.638799607Z" level=info msg="StartContainer for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\"" Apr 30 03:21:40.689193 systemd[1]: Started cri-containerd-6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883.scope - libcontainer container 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883. Apr 30 03:21:40.771579 containerd[1466]: time="2025-04-30T03:21:40.771480951Z" level=info msg="StartContainer for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" returns successfully" Apr 30 03:21:40.914193 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:21:40.914346 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Apr 30 03:21:40.939685 systemd[1]: cri-containerd-6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883.scope: Deactivated successfully.
Apr 30 03:21:41.133128 kubelet[1770]: E0430 03:21:41.133089 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:41.213943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883-rootfs.mount: Deactivated successfully.
Apr 30 03:21:41.244517 kubelet[1770]: I0430 03:21:41.244413 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fh28g" podStartSLOduration=4.189662737 podStartE2EDuration="24.244384247s" podCreationTimestamp="2025-04-30 03:21:17 +0000 UTC" firstStartedPulling="2025-04-30 03:21:20.152867419 +0000 UTC m=+4.353636128" lastFinishedPulling="2025-04-30 03:21:40.207588899 +0000 UTC m=+24.408357638" observedRunningTime="2025-04-30 03:21:41.244098223 +0000 UTC m=+25.444866952" watchObservedRunningTime="2025-04-30 03:21:41.244384247 +0000 UTC m=+25.445152966"
Apr 30 03:21:41.396684 kubelet[1770]: E0430 03:21:41.396574 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:42.120116 containerd[1466]: time="2025-04-30T03:21:42.119963518Z" level=info msg="shim disconnected" id=6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 namespace=k8s.io
Apr 30 03:21:42.120116 containerd[1466]: time="2025-04-30T03:21:42.120062475Z" level=warning msg="cleaning up after shim disconnected" id=6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 namespace=k8s.io
Apr 30 03:21:42.120116 containerd[1466]: time="2025-04-30T03:21:42.120075932Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:21:42.121094 containerd[1466]: time="2025-04-30T03:21:42.121030688Z" level=error msg="ExecSync for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"2a6121decfe5cd8a36b96be583007a42dd36420a39bc8f6d20d2744826f8046f\": task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found"
Apr 30 03:21:42.121345 kubelet[1770]: E0430 03:21:42.121288 1770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"2a6121decfe5cd8a36b96be583007a42dd36420a39bc8f6d20d2744826f8046f\": task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 30 03:21:42.122518 containerd[1466]: time="2025-04-30T03:21:42.122444007Z" level=error msg="ExecSync for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found"
Apr 30 03:21:42.122789 kubelet[1770]: E0430 03:21:42.122690 1770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 30 03:21:42.123638 containerd[1466]: time="2025-04-30T03:21:42.123605547Z" level=error msg="ExecSync for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found"
Apr 30 03:21:42.123810 kubelet[1770]: E0430 03:21:42.123763 1770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 30 03:21:42.134956 kubelet[1770]: E0430 03:21:42.134785 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:42.135845 containerd[1466]: time="2025-04-30T03:21:42.135771596Z" level=error msg="ExecSync for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found"
Apr 30 03:21:42.136259 kubelet[1770]: E0430 03:21:42.135933 1770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883 not found: not found" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 30 03:21:42.140391 containerd[1466]: time="2025-04-30T03:21:42.140324038Z" level=error msg="ExecSync for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
Apr 30 03:21:42.140717 kubelet[1770]: E0430 03:21:42.140633 1770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 30 03:21:42.141255 containerd[1466]: time="2025-04-30T03:21:42.141196228Z" level=error msg="ExecSync for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
Apr 30 03:21:42.141448 kubelet[1770]: E0430 03:21:42.141398 1770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 30 03:21:42.397751 kubelet[1770]: E0430 03:21:42.397655 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:43.138373 kubelet[1770]: I0430 03:21:43.138333 1770 scope.go:117] "RemoveContainer" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883"
Apr 30 03:21:43.138559 kubelet[1770]: E0430 03:21:43.138410 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:43.140594 containerd[1466]: time="2025-04-30T03:21:43.140540665Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
Apr 30 03:21:43.166960 containerd[1466]: time="2025-04-30T03:21:43.166883144Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47\""
Apr 30 03:21:43.167743 containerd[1466]: time="2025-04-30T03:21:43.167677133Z" level=info msg="StartContainer for \"8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47\""
Apr 30 03:21:43.207018 systemd[1]: Started cri-containerd-8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47.scope - libcontainer container 8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47.
Apr 30 03:21:43.244271 containerd[1466]: time="2025-04-30T03:21:43.244221389Z" level=info msg="StartContainer for \"8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47\" returns successfully"
Apr 30 03:21:43.304002 systemd[1]: cri-containerd-8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47.scope: Deactivated successfully.
Apr 30 03:21:43.326899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47-rootfs.mount: Deactivated successfully.
Apr 30 03:21:43.333286 containerd[1466]: time="2025-04-30T03:21:43.333215213Z" level=info msg="shim disconnected" id=8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47 namespace=k8s.io
Apr 30 03:21:43.333286 containerd[1466]: time="2025-04-30T03:21:43.333284103Z" level=warning msg="cleaning up after shim disconnected" id=8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47 namespace=k8s.io
Apr 30 03:21:43.333448 containerd[1466]: time="2025-04-30T03:21:43.333298521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:21:43.398916 kubelet[1770]: E0430 03:21:43.398694 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:44.142876 kubelet[1770]: I0430 03:21:44.142833 1770 scope.go:117] "RemoveContainer" containerID="6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883"
Apr 30 03:21:44.143276 kubelet[1770]: I0430 03:21:44.143247 1770 scope.go:117] "RemoveContainer" containerID="8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47"
Apr 30 03:21:44.143381 kubelet[1770]: E0430 03:21:44.143340 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:44.143594 kubelet[1770]: E0430 03:21:44.143525 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-fh28g_calico-system(d176cd5f-2562-4c26-93f6-67799a61f96e)\"" pod="calico-system/calico-node-fh28g" podUID="d176cd5f-2562-4c26-93f6-67799a61f96e"
Apr 30 03:21:44.144281 containerd[1466]: time="2025-04-30T03:21:44.144243951Z" level=info msg="RemoveContainer for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\""
Apr 30 03:21:44.219685 containerd[1466]: time="2025-04-30T03:21:44.219609989Z" level=info msg="RemoveContainer for \"6ad2f40eddc47b16f535c06e7ab3b1fb838d06163f5bd2e605ba9464e35d9883\" returns successfully"
Apr 30 03:21:44.399438 kubelet[1770]: E0430 03:21:44.399246 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:45.400031 kubelet[1770]: E0430 03:21:45.399913 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:46.400590 kubelet[1770]: E0430 03:21:46.400543 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:47.082835 containerd[1466]: time="2025-04-30T03:21:47.082790695Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\""
Apr 30 03:21:47.112929 containerd[1466]: time="2025-04-30T03:21:47.112866718Z" level=error msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" failed" error="failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:21:47.113080 kubelet[1770]: E0430 03:21:47.113028 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:21:47.113121 kubelet[1770]: E0430 03:21:47.113083 1770 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"}
Apr 30 03:21:47.113146 kubelet[1770]: E0430 03:21:47.113123 1770 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6562acde-1c3c-4d40-a30d-35106d6fab16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:21:47.113215 kubelet[1770]: E0430 03:21:47.113147 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6562acde-1c3c-4d40-a30d-35106d6fab16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16"
Apr 30 03:21:47.401543 kubelet[1770]: E0430 03:21:47.401487 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:48.402307 kubelet[1770]: E0430 03:21:48.402223 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:49.402836 kubelet[1770]: E0430 03:21:49.402648 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:50.403101 kubelet[1770]: E0430 03:21:50.402965 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:51.081497 containerd[1466]: time="2025-04-30T03:21:51.081381182Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\""
Apr 30 03:21:51.123671 containerd[1466]: time="2025-04-30T03:21:51.123593698Z" level=error msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" failed" error="failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:21:51.123977 kubelet[1770]: E0430 03:21:51.123914 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:21:51.124027 kubelet[1770]: E0430 03:21:51.123995 1770 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"}
Apr 30 03:21:51.124103 kubelet[1770]: E0430 03:21:51.124062 1770 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a59f26ad-a021-4259-8f26-9e83377cfde3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:21:51.124204 kubelet[1770]: E0430 03:21:51.124119 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a59f26ad-a021-4259-8f26-9e83377cfde3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-sqxb2" podUID="a59f26ad-a021-4259-8f26-9e83377cfde3"
Apr 30 03:21:51.403459 kubelet[1770]: E0430 03:21:51.403394 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:52.404377 kubelet[1770]: E0430 03:21:52.404282 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:53.404843 kubelet[1770]: E0430 03:21:53.404701 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:54.146456 update_engine[1458]: I20250430 03:21:54.146256  1458 update_attempter.cc:509] Updating boot flags...
Apr 30 03:21:54.183159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2539)
Apr 30 03:21:54.226796 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2542)
Apr 30 03:21:54.268372 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2542)
Apr 30 03:21:54.405258 kubelet[1770]: E0430 03:21:54.404972 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:55.405886 kubelet[1770]: E0430 03:21:55.405788 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:56.382053 kubelet[1770]: E0430 03:21:56.381966 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:56.406724 kubelet[1770]: E0430 03:21:56.406571 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:57.407784 kubelet[1770]: E0430 03:21:57.407690 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:58.080909 containerd[1466]: time="2025-04-30T03:21:58.080840560Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\""
Apr 30 03:21:58.123405 containerd[1466]: time="2025-04-30T03:21:58.123334405Z" level=error msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" failed" error="failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:21:58.123685 kubelet[1770]: E0430 03:21:58.123615 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:21:58.123803 kubelet[1770]: E0430 03:21:58.123695 1770 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"}
Apr 30 03:21:58.123803 kubelet[1770]: E0430 03:21:58.123778 1770 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6562acde-1c3c-4d40-a30d-35106d6fab16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:21:58.123981 kubelet[1770]: E0430 03:21:58.123811 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6562acde-1c3c-4d40-a30d-35106d6fab16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z88pf" podUID="6562acde-1c3c-4d40-a30d-35106d6fab16"
Apr 30 03:21:58.408537 kubelet[1770]: E0430 03:21:58.408470 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:59.081117 kubelet[1770]: I0430 03:21:59.080952 1770 scope.go:117] "RemoveContainer" containerID="8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47"
Apr 30 03:21:59.081117 kubelet[1770]: E0430 03:21:59.081133 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:21:59.084880 containerd[1466]: time="2025-04-30T03:21:59.084799546Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}"
Apr 30 03:21:59.137927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968091462.mount: Deactivated successfully.
Apr 30 03:21:59.153302 containerd[1466]: time="2025-04-30T03:21:59.153200584Z" level=info msg="CreateContainer within sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5\""
Apr 30 03:21:59.154172 containerd[1466]: time="2025-04-30T03:21:59.154123634Z" level=info msg="StartContainer for \"f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5\""
Apr 30 03:21:59.198212 systemd[1]: Started cri-containerd-f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5.scope - libcontainer container f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5.
Apr 30 03:21:59.245058 containerd[1466]: time="2025-04-30T03:21:59.245002256Z" level=info msg="StartContainer for \"f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5\" returns successfully"
Apr 30 03:21:59.388789 systemd[1]: cri-containerd-f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5.scope: Deactivated successfully.
Apr 30 03:21:59.409244 kubelet[1770]: E0430 03:21:59.409195 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:21:59.414977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5-rootfs.mount: Deactivated successfully.
Apr 30 03:21:59.561342 containerd[1466]: time="2025-04-30T03:21:59.561228460Z" level=info msg="shim disconnected" id=f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5 namespace=k8s.io
Apr 30 03:21:59.561342 containerd[1466]: time="2025-04-30T03:21:59.561326465Z" level=warning msg="cleaning up after shim disconnected" id=f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5 namespace=k8s.io
Apr 30 03:21:59.561342 containerd[1466]: time="2025-04-30T03:21:59.561337085Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:22:00.189304 kubelet[1770]: I0430 03:22:00.189233 1770 scope.go:117] "RemoveContainer" containerID="8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47"
Apr 30 03:22:00.189790 kubelet[1770]: I0430 03:22:00.189750 1770 scope.go:117] "RemoveContainer" containerID="f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5"
Apr 30 03:22:00.190290 kubelet[1770]: E0430 03:22:00.189842 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:22:00.190290 kubelet[1770]: E0430 03:22:00.189968 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-fh28g_calico-system(d176cd5f-2562-4c26-93f6-67799a61f96e)\"" pod="calico-system/calico-node-fh28g" podUID="d176cd5f-2562-4c26-93f6-67799a61f96e"
Apr 30 03:22:00.191192 containerd[1466]: time="2025-04-30T03:22:00.190795652Z" level=info msg="RemoveContainer for \"8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47\""
Apr 30 03:22:00.216272 containerd[1466]: time="2025-04-30T03:22:00.216204427Z" level=info msg="RemoveContainer for \"8a5c63cb5dc9f28b896dbe3f0d206a17f2192bf8c808edfd60423b8e381a8e47\" returns successfully"
Apr 30 03:22:00.410137 kubelet[1770]: E0430 03:22:00.410049 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:01.410420 kubelet[1770]: E0430 03:22:01.410332 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:02.080890 containerd[1466]: time="2025-04-30T03:22:02.080708007Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\""
Apr 30 03:22:02.115494 containerd[1466]: time="2025-04-30T03:22:02.115395900Z" level=error msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" failed" error="failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:22:02.115701 kubelet[1770]: E0430 03:22:02.115639 1770 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:02.115794 kubelet[1770]: E0430 03:22:02.115708 1770 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"}
Apr 30 03:22:02.115794 kubelet[1770]: E0430 03:22:02.115765 1770 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a59f26ad-a021-4259-8f26-9e83377cfde3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:22:02.115950 kubelet[1770]: E0430 03:22:02.115797 1770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a59f26ad-a021-4259-8f26-9e83377cfde3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-sqxb2" podUID="a59f26ad-a021-4259-8f26-9e83377cfde3"
Apr 30 03:22:02.411337 kubelet[1770]: E0430 03:22:02.411192 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:03.411545 kubelet[1770]: E0430 03:22:03.411455 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:03.701446 containerd[1466]: time="2025-04-30T03:22:03.701274242Z" level=info msg="StopPodSandbox for \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\""
Apr 30 03:22:03.701446 containerd[1466]: time="2025-04-30T03:22:03.701338192Z" level=info msg="Container to stop \"2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:22:03.701446 containerd[1466]: time="2025-04-30T03:22:03.701354773Z" level=info msg="Container to stop \"56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:22:03.701446 containerd[1466]: time="2025-04-30T03:22:03.701365092Z" level=info msg="Container to stop \"f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:22:03.703694 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82-shm.mount: Deactivated successfully.
Apr 30 03:22:03.713014 systemd[1]: cri-containerd-af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82.scope: Deactivated successfully.
Apr 30 03:22:03.733553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82-rootfs.mount: Deactivated successfully.
Apr 30 03:22:03.740698 containerd[1466]: time="2025-04-30T03:22:03.740622634Z" level=info msg="shim disconnected" id=af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82 namespace=k8s.io
Apr 30 03:22:03.740698 containerd[1466]: time="2025-04-30T03:22:03.740688518Z" level=warning msg="cleaning up after shim disconnected" id=af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82 namespace=k8s.io
Apr 30 03:22:03.740698 containerd[1466]: time="2025-04-30T03:22:03.740697726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:22:03.757276 containerd[1466]: time="2025-04-30T03:22:03.757217919Z" level=info msg="TearDown network for sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" successfully"
Apr 30 03:22:03.757276 containerd[1466]: time="2025-04-30T03:22:03.757259046Z" level=info msg="StopPodSandbox for \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" returns successfully"
Apr 30 03:22:03.852346 kubelet[1770]: I0430 03:22:03.852291 1770 memory_manager.go:355] "RemoveStaleState removing state" podUID="d176cd5f-2562-4c26-93f6-67799a61f96e" containerName="calico-node"
Apr 30 03:22:03.852346 kubelet[1770]: I0430 03:22:03.852322 1770 memory_manager.go:355] "RemoveStaleState removing state" podUID="d176cd5f-2562-4c26-93f6-67799a61f96e" containerName="calico-node"
Apr 30 03:22:03.852346 kubelet[1770]: I0430 03:22:03.852328 1770 memory_manager.go:355] "RemoveStaleState removing state" podUID="d176cd5f-2562-4c26-93f6-67799a61f96e" containerName="calico-node"
Apr 30 03:22:03.858524 systemd[1]: Created slice kubepods-besteffort-pod89ce5fa4_d209_40e7_8546_c353ddbf155a.slice - libcontainer container kubepods-besteffort-pod89ce5fa4_d209_40e7_8546_c353ddbf155a.slice.
Apr 30 03:22:03.858944 kubelet[1770]: I0430 03:22:03.858788 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-bin-dir\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.858944 kubelet[1770]: I0430 03:22:03.858821 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d176cd5f-2562-4c26-93f6-67799a61f96e-node-certs\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.858944 kubelet[1770]: I0430 03:22:03.858835 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-log-dir\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.858944 kubelet[1770]: I0430 03:22:03.858849 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-xtables-lock\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.858944 kubelet[1770]: I0430 03:22:03.858862 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-run-calico\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.858944 kubelet[1770]: I0430 03:22:03.858874 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-policysync\") pod 
\"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859201 kubelet[1770]: I0430 03:22:03.858898 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-flexvol-driver-host\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859201 kubelet[1770]: I0430 03:22:03.858916 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-lib-modules\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859201 kubelet[1770]: I0430 03:22:03.858932 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d176cd5f-2562-4c26-93f6-67799a61f96e-tigera-ca-bundle\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859201 kubelet[1770]: I0430 03:22:03.858950 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-net-dir\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859201 kubelet[1770]: I0430 03:22:03.858964 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvmgb\" (UniqueName: \"kubernetes.io/projected/d176cd5f-2562-4c26-93f6-67799a61f96e-kube-api-access-qvmgb\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859201 kubelet[1770]: I0430 03:22:03.858977 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-lib-calico\") pod \"d176cd5f-2562-4c26-93f6-67799a61f96e\" (UID: \"d176cd5f-2562-4c26-93f6-67799a61f96e\") " Apr 30 03:22:03.859434 kubelet[1770]: I0430 03:22:03.859019 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-cni-bin-dir\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859434 kubelet[1770]: I0430 03:22:03.859036 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/89ce5fa4-d209-40e7-8546-c353ddbf155a-node-certs\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859434 kubelet[1770]: I0430 03:22:03.859052 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqld4\" (UniqueName: \"kubernetes.io/projected/89ce5fa4-d209-40e7-8546-c353ddbf155a-kube-api-access-pqld4\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859434 kubelet[1770]: I0430 03:22:03.859075 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-policysync\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859434 kubelet[1770]: I0430 03:22:03.859113 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-var-run-calico\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859630 kubelet[1770]: I0430 03:22:03.859136 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-cni-log-dir\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859630 kubelet[1770]: I0430 03:22:03.859155 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-flexvol-driver-host\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.859630 kubelet[1770]: I0430 03:22:03.859161 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.859630 kubelet[1770]: I0430 03:22:03.859216 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.859630 kubelet[1770]: I0430 03:22:03.859501 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.859996 kubelet[1770]: I0430 03:22:03.859964 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.860055 kubelet[1770]: I0430 03:22:03.860011 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.860055 kubelet[1770]: I0430 03:22:03.860041 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.860130 kubelet[1770]: I0430 03:22:03.860069 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.860180 kubelet[1770]: I0430 03:22:03.860167 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.860229 kubelet[1770]: I0430 03:22:03.860201 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-policysync" (OuterVolumeSpecName: "policysync") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 03:22:03.860229 kubelet[1770]: I0430 03:22:03.859170 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-xtables-lock\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.860300 kubelet[1770]: I0430 03:22:03.860241 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-lib-modules\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.860300 kubelet[1770]: I0430 03:22:03.860269 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89ce5fa4-d209-40e7-8546-c353ddbf155a-tigera-ca-bundle\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.860300 kubelet[1770]: I0430 03:22:03.860289 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-cni-net-dir\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 03:22:03.860409 kubelet[1770]: I0430 03:22:03.860314 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89ce5fa4-d209-40e7-8546-c353ddbf155a-var-lib-calico\") pod \"calico-node-vxww4\" (UID: \"89ce5fa4-d209-40e7-8546-c353ddbf155a\") " pod="calico-system/calico-node-vxww4" Apr 30 
03:22:03.860409 kubelet[1770]: I0430 03:22:03.860345 1770 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-xtables-lock\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860409 kubelet[1770]: I0430 03:22:03.860359 1770 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-lib-modules\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860409 kubelet[1770]: I0430 03:22:03.860370 1770 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-net-dir\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860409 kubelet[1770]: I0430 03:22:03.860380 1770 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-bin-dir\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860409 kubelet[1770]: I0430 03:22:03.860392 1770 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-cni-log-dir\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860409 kubelet[1770]: I0430 03:22:03.860404 1770 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-run-calico\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860649 kubelet[1770]: I0430 03:22:03.860415 1770 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-policysync\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860649 kubelet[1770]: I0430 03:22:03.860426 1770 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-flexvol-driver-host\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.860649 kubelet[1770]: I0430 03:22:03.860439 1770 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d176cd5f-2562-4c26-93f6-67799a61f96e-var-lib-calico\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.863322 kubelet[1770]: I0430 03:22:03.863260 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d176cd5f-2562-4c26-93f6-67799a61f96e-node-certs" (OuterVolumeSpecName: "node-certs") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 03:22:03.863596 kubelet[1770]: I0430 03:22:03.863537 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d176cd5f-2562-4c26-93f6-67799a61f96e-kube-api-access-qvmgb" (OuterVolumeSpecName: "kube-api-access-qvmgb") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "kube-api-access-qvmgb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 03:22:03.864721 systemd[1]: var-lib-kubelet-pods-d176cd5f\x2d2562\x2d4c26\x2d93f6\x2d67799a61f96e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqvmgb.mount: Deactivated successfully. Apr 30 03:22:03.865216 kubelet[1770]: I0430 03:22:03.864977 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d176cd5f-2562-4c26-93f6-67799a61f96e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d176cd5f-2562-4c26-93f6-67799a61f96e" (UID: "d176cd5f-2562-4c26-93f6-67799a61f96e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 03:22:03.864950 systemd[1]: var-lib-kubelet-pods-d176cd5f\x2d2562\x2d4c26\x2d93f6\x2d67799a61f96e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Apr 30 03:22:03.867310 systemd[1]: var-lib-kubelet-pods-d176cd5f\x2d2562\x2d4c26\x2d93f6\x2d67799a61f96e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:22:03.961442 kubelet[1770]: I0430 03:22:03.961131 1770 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qvmgb\" (UniqueName: \"kubernetes.io/projected/d176cd5f-2562-4c26-93f6-67799a61f96e-kube-api-access-qvmgb\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.961442 kubelet[1770]: I0430 03:22:03.961184 1770 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d176cd5f-2562-4c26-93f6-67799a61f96e-tigera-ca-bundle\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:03.961442 kubelet[1770]: I0430 03:22:03.961193 1770 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d176cd5f-2562-4c26-93f6-67799a61f96e-node-certs\") on node \"10.0.0.31\" DevicePath \"\"" Apr 30 03:22:04.163066 kubelet[1770]: E0430 03:22:04.163008 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:04.163696 containerd[1466]: time="2025-04-30T03:22:04.163628355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vxww4,Uid:89ce5fa4-d209-40e7-8546-c353ddbf155a,Namespace:calico-system,Attempt:0,}" Apr 30 03:22:04.190166 containerd[1466]: time="2025-04-30T03:22:04.190026603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:22:04.190166 containerd[1466]: time="2025-04-30T03:22:04.190101434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:22:04.190166 containerd[1466]: time="2025-04-30T03:22:04.190118315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:04.190408 containerd[1466]: time="2025-04-30T03:22:04.190229245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:04.203883 kubelet[1770]: I0430 03:22:04.203833 1770 scope.go:117] "RemoveContainer" containerID="f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5" Apr 30 03:22:04.205826 containerd[1466]: time="2025-04-30T03:22:04.205782550Z" level=info msg="RemoveContainer for \"f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5\"" Apr 30 03:22:04.209740 containerd[1466]: time="2025-04-30T03:22:04.209691508Z" level=info msg="RemoveContainer for \"f18751a8b0f3a5cb17c96c5f01d0f346deb1e78ab07b96a3b8ea6533fa60b2c5\" returns successfully" Apr 30 03:22:04.209882 kubelet[1770]: I0430 03:22:04.209856 1770 scope.go:117] "RemoveContainer" containerID="56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f" Apr 30 03:22:04.210674 containerd[1466]: time="2025-04-30T03:22:04.210651615Z" level=info msg="RemoveContainer for \"56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f\"" Apr 30 03:22:04.210933 systemd[1]: Started cri-containerd-aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06.scope - libcontainer container aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06. 
Apr 30 03:22:04.212554 systemd[1]: Removed slice kubepods-besteffort-podd176cd5f_2562_4c26_93f6_67799a61f96e.slice - libcontainer container kubepods-besteffort-podd176cd5f_2562_4c26_93f6_67799a61f96e.slice. Apr 30 03:22:04.212646 systemd[1]: kubepods-besteffort-podd176cd5f_2562_4c26_93f6_67799a61f96e.slice: Consumed 1.848s CPU time. Apr 30 03:22:04.215803 containerd[1466]: time="2025-04-30T03:22:04.215759709Z" level=info msg="RemoveContainer for \"56b1c3b3a2f65b6c9f9f4aa09e17582259be6ddd2aad2a33456d980215543b6f\" returns successfully" Apr 30 03:22:04.216053 kubelet[1770]: I0430 03:22:04.215991 1770 scope.go:117] "RemoveContainer" containerID="2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd" Apr 30 03:22:04.217762 containerd[1466]: time="2025-04-30T03:22:04.217299567Z" level=info msg="RemoveContainer for \"2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd\"" Apr 30 03:22:04.221456 containerd[1466]: time="2025-04-30T03:22:04.221402931Z" level=info msg="RemoveContainer for \"2ff74e398130f84e4bd2c2115eb96d4c179b63198070e7c2fd51468da184b5dd\" returns successfully" Apr 30 03:22:04.234365 containerd[1466]: time="2025-04-30T03:22:04.234296771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vxww4,Uid:89ce5fa4-d209-40e7-8546-c353ddbf155a,Namespace:calico-system,Attempt:0,} returns sandbox id \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\"" Apr 30 03:22:04.235146 kubelet[1770]: E0430 03:22:04.235114 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:04.236851 containerd[1466]: time="2025-04-30T03:22:04.236814680Z" level=info msg="CreateContainer within sandbox \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:22:04.255210 containerd[1466]: 
time="2025-04-30T03:22:04.255137869Z" level=info msg="CreateContainer within sandbox \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e\"" Apr 30 03:22:04.255769 containerd[1466]: time="2025-04-30T03:22:04.255709946Z" level=info msg="StartContainer for \"547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e\"" Apr 30 03:22:04.285003 systemd[1]: Started cri-containerd-547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e.scope - libcontainer container 547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e. Apr 30 03:22:04.317165 containerd[1466]: time="2025-04-30T03:22:04.317082995Z" level=info msg="StartContainer for \"547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e\" returns successfully" Apr 30 03:22:04.336241 systemd[1]: cri-containerd-547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e.scope: Deactivated successfully. 
Apr 30 03:22:04.388570 containerd[1466]: time="2025-04-30T03:22:04.388495482Z" level=info msg="shim disconnected" id=547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e namespace=k8s.io Apr 30 03:22:04.388570 containerd[1466]: time="2025-04-30T03:22:04.388564602Z" level=warning msg="cleaning up after shim disconnected" id=547404e310e61c5f7a6524285bc8f9f84cf3feef5bdf02e0afda359cfec6314e namespace=k8s.io Apr 30 03:22:04.388570 containerd[1466]: time="2025-04-30T03:22:04.388575672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:22:04.412686 kubelet[1770]: E0430 03:22:04.412606 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:05.083611 kubelet[1770]: I0430 03:22:05.083503 1770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d176cd5f-2562-4c26-93f6-67799a61f96e" path="/var/lib/kubelet/pods/d176cd5f-2562-4c26-93f6-67799a61f96e/volumes" Apr 30 03:22:05.205582 kubelet[1770]: E0430 03:22:05.205541 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:05.207620 containerd[1466]: time="2025-04-30T03:22:05.207569667Z" level=info msg="CreateContainer within sandbox \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:22:05.232142 containerd[1466]: time="2025-04-30T03:22:05.232059393Z" level=info msg="CreateContainer within sandbox \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421\"" Apr 30 03:22:05.232956 containerd[1466]: time="2025-04-30T03:22:05.232839360Z" level=info msg="StartContainer for \"ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421\"" 
Apr 30 03:22:05.268900 systemd[1]: Started cri-containerd-ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421.scope - libcontainer container ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421. Apr 30 03:22:05.306418 containerd[1466]: time="2025-04-30T03:22:05.306347833Z" level=info msg="StartContainer for \"ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421\" returns successfully" Apr 30 03:22:05.413171 kubelet[1770]: E0430 03:22:05.413067 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:05.705797 systemd[1]: cri-containerd-ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421.scope: Deactivated successfully. Apr 30 03:22:05.729021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421-rootfs.mount: Deactivated successfully. Apr 30 03:22:05.869940 containerd[1466]: time="2025-04-30T03:22:05.869840918Z" level=info msg="shim disconnected" id=ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421 namespace=k8s.io Apr 30 03:22:05.869940 containerd[1466]: time="2025-04-30T03:22:05.869915638Z" level=warning msg="cleaning up after shim disconnected" id=ac97b5749d96cdea6ddb1f9139d0d61f28835c2a206562a8da5192d489e83421 namespace=k8s.io Apr 30 03:22:05.869940 containerd[1466]: time="2025-04-30T03:22:05.869924626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:22:06.211313 kubelet[1770]: E0430 03:22:06.211257 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:06.223073 containerd[1466]: time="2025-04-30T03:22:06.223017814Z" level=info msg="CreateContainer within sandbox \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 
30 03:22:06.413946 kubelet[1770]: E0430 03:22:06.413834 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:06.577185 containerd[1466]: time="2025-04-30T03:22:06.576984869Z" level=info msg="CreateContainer within sandbox \"aab2ab59a5e89e5b0351c204e9c5bc9a25197673ff9296ecd727d0b9e3d09c06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6098c89a55ab891a3c85e32fc49efc16cb0751e61386115eea2c7dda2c9a002e\"" Apr 30 03:22:06.577962 containerd[1466]: time="2025-04-30T03:22:06.577890092Z" level=info msg="StartContainer for \"6098c89a55ab891a3c85e32fc49efc16cb0751e61386115eea2c7dda2c9a002e\"" Apr 30 03:22:06.614937 systemd[1]: Started cri-containerd-6098c89a55ab891a3c85e32fc49efc16cb0751e61386115eea2c7dda2c9a002e.scope - libcontainer container 6098c89a55ab891a3c85e32fc49efc16cb0751e61386115eea2c7dda2c9a002e. Apr 30 03:22:06.688399 containerd[1466]: time="2025-04-30T03:22:06.688306847Z" level=info msg="StartContainer for \"6098c89a55ab891a3c85e32fc49efc16cb0751e61386115eea2c7dda2c9a002e\" returns successfully" Apr 30 03:22:07.215016 kubelet[1770]: E0430 03:22:07.214980 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:07.414970 kubelet[1770]: E0430 03:22:07.414911 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:07.612522 kubelet[1770]: I0430 03:22:07.612344 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vxww4" podStartSLOduration=4.612313148 podStartE2EDuration="4.612313148s" podCreationTimestamp="2025-04-30 03:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:22:07.612185058 +0000 UTC 
m=+51.812953777" watchObservedRunningTime="2025-04-30 03:22:07.612313148 +0000 UTC m=+51.813081867" Apr 30 03:22:08.217538 kubelet[1770]: E0430 03:22:08.217476 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:08.238644 systemd[1]: run-containerd-runc-k8s.io-6098c89a55ab891a3c85e32fc49efc16cb0751e61386115eea2c7dda2c9a002e-runc.yd0XWR.mount: Deactivated successfully. Apr 30 03:22:08.415505 kubelet[1770]: E0430 03:22:08.415425 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:09.081285 containerd[1466]: time="2025-04-30T03:22:09.081206005Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\"" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.134 [INFO][2988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.135 [INFO][2988] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" iface="eth0" netns="/var/run/netns/cni-d8d9bce9-323f-94fd-73a8-3a7279574fdf" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.135 [INFO][2988] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" iface="eth0" netns="/var/run/netns/cni-d8d9bce9-323f-94fd-73a8-3a7279574fdf" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.135 [INFO][2988] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" iface="eth0" netns="/var/run/netns/cni-d8d9bce9-323f-94fd-73a8-3a7279574fdf" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.135 [INFO][2988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.135 [INFO][2988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.164 [INFO][2997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.164 [INFO][2997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.164 [INFO][2997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.285 [WARNING][2997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.285 [INFO][2997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.287 [INFO][2997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:22:09.292541 containerd[1466]: 2025-04-30 03:22:09.289 [INFO][2988] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Apr 30 03:22:09.293046 containerd[1466]: time="2025-04-30T03:22:09.292795670Z" level=info msg="TearDown network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" successfully" Apr 30 03:22:09.293046 containerd[1466]: time="2025-04-30T03:22:09.292852438Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" returns successfully" Apr 30 03:22:09.293652 containerd[1466]: time="2025-04-30T03:22:09.293623066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z88pf,Uid:6562acde-1c3c-4d40-a30d-35106d6fab16,Namespace:calico-system,Attempt:1,}" Apr 30 03:22:09.295054 systemd[1]: run-netns-cni\x2dd8d9bce9\x2d323f\x2d94fd\x2d73a8\x2d3a7279574fdf.mount: Deactivated successfully. 
Apr 30 03:22:09.416628 kubelet[1770]: E0430 03:22:09.416552 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:09.659790 kernel: bpftool[3157]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:22:09.774971 systemd-networkd[1404]: caliae3db18a354: Link UP Apr 30 03:22:09.775205 systemd-networkd[1404]: caliae3db18a354: Gained carrier Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.534 [INFO][3101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.592 [INFO][3101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.31-k8s-csi--node--driver--z88pf-eth0 csi-node-driver- calico-system 6562acde-1c3c-4d40-a30d-35106d6fab16 1115 0 2025-04-30 03:21:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.31 csi-node-driver-z88pf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliae3db18a354 [] []}} ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.592 [INFO][3101] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.639 [INFO][3140] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" HandleID="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.718 [INFO][3140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" HandleID="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df3b0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.31", "pod":"csi-node-driver-z88pf", "timestamp":"2025-04-30 03:22:09.639183095 +0000 UTC"}, Hostname:"10.0.0.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.718 [INFO][3140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.718 [INFO][3140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.718 [INFO][3140] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.31' Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.722 [INFO][3140] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.729 [INFO][3140] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.737 [INFO][3140] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.740 [INFO][3140] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.743 [INFO][3140] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.743 [INFO][3140] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.745 [INFO][3140] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.752 [INFO][3140] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.760 [INFO][3140] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.1/26] block=192.168.54.0/26 
handle="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.761 [INFO][3140] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.1/26] handle="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" host="10.0.0.31" Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.761 [INFO][3140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:22:09.789521 containerd[1466]: 2025-04-30 03:22:09.761 [INFO][3140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.1/26] IPv6=[] ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" HandleID="k8s-pod-network.3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.790416 containerd[1466]: 2025-04-30 03:22:09.764 [INFO][3101] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-csi--node--driver--z88pf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6562acde-1c3c-4d40-a30d-35106d6fab16", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"", Pod:"csi-node-driver-z88pf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae3db18a354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:22:09.790416 containerd[1466]: 2025-04-30 03:22:09.764 [INFO][3101] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.1/32] ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.790416 containerd[1466]: 2025-04-30 03:22:09.764 [INFO][3101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae3db18a354 ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.790416 containerd[1466]: 2025-04-30 03:22:09.772 [INFO][3101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.790416 containerd[1466]: 2025-04-30 03:22:09.772 [INFO][3101] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" 
Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-csi--node--driver--z88pf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6562acde-1c3c-4d40-a30d-35106d6fab16", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad", Pod:"csi-node-driver-z88pf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae3db18a354", MAC:"62:9e:54:2c:79:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:22:09.790416 containerd[1466]: 2025-04-30 03:22:09.786 [INFO][3101] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad" Namespace="calico-system" Pod="csi-node-driver-z88pf" WorkloadEndpoint="10.0.0.31-k8s-csi--node--driver--z88pf-eth0" Apr 30 03:22:09.818981 containerd[1466]: 
time="2025-04-30T03:22:09.818716698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:22:09.818981 containerd[1466]: time="2025-04-30T03:22:09.818878773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:22:09.818981 containerd[1466]: time="2025-04-30T03:22:09.818899462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:09.819407 containerd[1466]: time="2025-04-30T03:22:09.819196761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:09.850073 systemd[1]: Started cri-containerd-3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad.scope - libcontainer container 3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad. 
Apr 30 03:22:09.867158 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:22:09.880338 containerd[1466]: time="2025-04-30T03:22:09.880284914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z88pf,Uid:6562acde-1c3c-4d40-a30d-35106d6fab16,Namespace:calico-system,Attempt:1,} returns sandbox id \"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad\"" Apr 30 03:22:09.883028 containerd[1466]: time="2025-04-30T03:22:09.882963731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:22:09.940709 systemd-networkd[1404]: vxlan.calico: Link UP Apr 30 03:22:09.940722 systemd-networkd[1404]: vxlan.calico: Gained carrier Apr 30 03:22:10.417374 kubelet[1770]: E0430 03:22:10.417291 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:11.081976 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Apr 30 03:22:11.418386 kubelet[1770]: E0430 03:22:11.418329 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:11.658039 systemd-networkd[1404]: caliae3db18a354: Gained IPv6LL Apr 30 03:22:12.419381 kubelet[1770]: E0430 03:22:12.419302 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:13.420260 kubelet[1770]: E0430 03:22:13.420204 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:13.499419 containerd[1466]: time="2025-04-30T03:22:13.499341126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:13.538983 containerd[1466]: time="2025-04-30T03:22:13.538897116Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:22:13.551293 containerd[1466]: time="2025-04-30T03:22:13.551219525Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:13.573520 containerd[1466]: time="2025-04-30T03:22:13.573452428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:13.574368 containerd[1466]: time="2025-04-30T03:22:13.574317915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.691282128s" Apr 30 03:22:13.574419 containerd[1466]: time="2025-04-30T03:22:13.574365695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:22:13.577010 containerd[1466]: time="2025-04-30T03:22:13.576967825Z" level=info msg="CreateContainer within sandbox \"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:22:14.080803 containerd[1466]: time="2025-04-30T03:22:14.080757184Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\"" Apr 30 03:22:14.161991 containerd[1466]: time="2025-04-30T03:22:14.161901256Z" level=info msg="CreateContainer within sandbox \"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"94727487db104e661a1d3c1cf78aa9d121bfead6eccb843bb2f0956a19cfe0bb\"" Apr 30 03:22:14.162650 containerd[1466]: time="2025-04-30T03:22:14.162601282Z" level=info msg="StartContainer for \"94727487db104e661a1d3c1cf78aa9d121bfead6eccb843bb2f0956a19cfe0bb\"" Apr 30 03:22:14.241061 systemd[1]: Started cri-containerd-94727487db104e661a1d3c1cf78aa9d121bfead6eccb843bb2f0956a19cfe0bb.scope - libcontainer container 94727487db104e661a1d3c1cf78aa9d121bfead6eccb843bb2f0956a19cfe0bb. Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.203 [INFO][3311] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.203 [INFO][3311] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" iface="eth0" netns="/var/run/netns/cni-ab99cda0-9b4b-692a-3c3f-332ff6f4182b" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.203 [INFO][3311] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" iface="eth0" netns="/var/run/netns/cni-ab99cda0-9b4b-692a-3c3f-332ff6f4182b" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.204 [INFO][3311] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" iface="eth0" netns="/var/run/netns/cni-ab99cda0-9b4b-692a-3c3f-332ff6f4182b" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.204 [INFO][3311] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.204 [INFO][3311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.275 [INFO][3336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.275 [INFO][3336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.275 [INFO][3336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.283 [WARNING][3336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.283 [INFO][3336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.286 [INFO][3336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:22:14.291109 containerd[1466]: 2025-04-30 03:22:14.288 [INFO][3311] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Apr 30 03:22:14.291780 containerd[1466]: time="2025-04-30T03:22:14.291353075Z" level=info msg="TearDown network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" successfully" Apr 30 03:22:14.291780 containerd[1466]: time="2025-04-30T03:22:14.291387249Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" returns successfully" Apr 30 03:22:14.292299 containerd[1466]: time="2025-04-30T03:22:14.292257544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-sqxb2,Uid:a59f26ad-a021-4259-8f26-9e83377cfde3,Namespace:default,Attempt:1,}" Apr 30 03:22:14.293196 systemd[1]: run-netns-cni\x2dab99cda0\x2d9b4b\x2d692a\x2d3c3f\x2d332ff6f4182b.mount: Deactivated successfully. 
Apr 30 03:22:14.367620 containerd[1466]: time="2025-04-30T03:22:14.367411930Z" level=info msg="StartContainer for \"94727487db104e661a1d3c1cf78aa9d121bfead6eccb843bb2f0956a19cfe0bb\" returns successfully" Apr 30 03:22:14.369113 containerd[1466]: time="2025-04-30T03:22:14.369040290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:22:14.420846 kubelet[1770]: E0430 03:22:14.420763 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:14.754424 systemd-networkd[1404]: cali684e13d43bf: Link UP Apr 30 03:22:14.754720 systemd-networkd[1404]: cali684e13d43bf: Gained carrier Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.633 [INFO][3361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0 nginx-deployment-7fcdb87857- default a59f26ad-a021-4259-8f26-9e83377cfde3 1152 0 2025-04-30 03:21:34 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.31 nginx-deployment-7fcdb87857-sqxb2 eth0 default [] [] [kns.default ksa.default.default] cali684e13d43bf [] []}} ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.633 [INFO][3361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.665 [INFO][3376] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" HandleID="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.692 [INFO][3376] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" HandleID="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005bbd20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.31", "pod":"nginx-deployment-7fcdb87857-sqxb2", "timestamp":"2025-04-30 03:22:14.665974691 +0000 UTC"}, Hostname:"10.0.0.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.692 [INFO][3376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.693 [INFO][3376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.693 [INFO][3376] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.31' Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.697 [INFO][3376] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.706 [INFO][3376] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.716 [INFO][3376] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.724 [INFO][3376] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.728 [INFO][3376] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.728 [INFO][3376] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.731 [INFO][3376] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4 Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.738 [INFO][3376] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.748 [INFO][3376] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.2/26] block=192.168.54.0/26 
handle="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.748 [INFO][3376] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.2/26] handle="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" host="10.0.0.31" Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.748 [INFO][3376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:22:14.767330 containerd[1466]: 2025-04-30 03:22:14.748 [INFO][3376] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.2/26] IPv6=[] ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" HandleID="k8s-pod-network.2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" Apr 30 03:22:14.768428 containerd[1466]: 2025-04-30 03:22:14.751 [INFO][3361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a59f26ad-a021-4259-8f26-9e83377cfde3", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-sqxb2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali684e13d43bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:22:14.768428 containerd[1466]: 2025-04-30 03:22:14.751 [INFO][3361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.2/32] ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:14.768428 containerd[1466]: 2025-04-30 03:22:14.751 [INFO][3361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali684e13d43bf ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:14.768428 containerd[1466]: 2025-04-30 03:22:14.755 [INFO][3361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:14.768428 containerd[1466]: 2025-04-30 03:22:14.755 [INFO][3361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a59f26ad-a021-4259-8f26-9e83377cfde3", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4", Pod:"nginx-deployment-7fcdb87857-sqxb2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali684e13d43bf", MAC:"be:5b:52:d1:4d:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:22:14.768428 containerd[1466]: 2025-04-30 03:22:14.764 [INFO][3361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4" Namespace="default" Pod="nginx-deployment-7fcdb87857-sqxb2" WorkloadEndpoint="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:14.797084 containerd[1466]: time="2025-04-30T03:22:14.796720723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:22:14.797084 containerd[1466]: time="2025-04-30T03:22:14.797012441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:22:14.797084 containerd[1466]: time="2025-04-30T03:22:14.797034101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:22:14.797354 containerd[1466]: time="2025-04-30T03:22:14.797168483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:22:14.826249 systemd[1]: Started cri-containerd-2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4.scope - libcontainer container 2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4.
Apr 30 03:22:14.844417 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 30 03:22:14.876749 containerd[1466]: time="2025-04-30T03:22:14.876604665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-sqxb2,Uid:a59f26ad-a021-4259-8f26-9e83377cfde3,Namespace:default,Attempt:1,} returns sandbox id \"2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4\""
Apr 30 03:22:15.421244 kubelet[1770]: E0430 03:22:15.421180 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:16.382121 kubelet[1770]: E0430 03:22:16.382052 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:16.396092 containerd[1466]: time="2025-04-30T03:22:16.396037219Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\""
Apr 30 03:22:16.422293 kubelet[1770]: E0430 03:22:16.422227 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.433 [WARNING][3460] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-csi--node--driver--z88pf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6562acde-1c3c-4d40-a30d-35106d6fab16", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad", Pod:"csi-node-driver-z88pf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae3db18a354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.433 [INFO][3460] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.433 [INFO][3460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" iface="eth0" netns=""
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.433 [INFO][3460] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.433 [INFO][3460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.456 [INFO][3468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.456 [INFO][3468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.457 [INFO][3468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.462 [WARNING][3468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.462 [INFO][3468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0"
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.464 [INFO][3468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:22:16.469177 containerd[1466]: 2025-04-30 03:22:16.466 [INFO][3460] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.469669 containerd[1466]: time="2025-04-30T03:22:16.469227583Z" level=info msg="TearDown network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" successfully"
Apr 30 03:22:16.469669 containerd[1466]: time="2025-04-30T03:22:16.469263130Z" level=info msg="StopPodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" returns successfully"
Apr 30 03:22:16.470002 containerd[1466]: time="2025-04-30T03:22:16.469961542Z" level=info msg="RemovePodSandbox for \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\""
Apr 30 03:22:16.470036 containerd[1466]: time="2025-04-30T03:22:16.470006177Z" level=info msg="Forcibly stopping sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\""
Apr 30 03:22:16.522022 systemd-networkd[1404]: cali684e13d43bf: Gained IPv6LL
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.651 [WARNING][3490] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-csi--node--driver--z88pf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6562acde-1c3c-4d40-a30d-35106d6fab16", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad", Pod:"csi-node-driver-z88pf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae3db18a354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.651 [INFO][3490] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.651 [INFO][3490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" iface="eth0" netns=""
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.651 [INFO][3490] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.651 [INFO][3490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.698 [INFO][3498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0"
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.698 [INFO][3498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.698 [INFO][3498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.704 [WARNING][3498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0"
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.704 [INFO][3498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" HandleID="k8s-pod-network.262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5" Workload="10.0.0.31-k8s-csi--node--driver--z88pf-eth0"
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.705 [INFO][3498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:22:16.710256 containerd[1466]: 2025-04-30 03:22:16.707 [INFO][3490] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5"
Apr 30 03:22:16.710677 containerd[1466]: time="2025-04-30T03:22:16.710256040Z" level=info msg="TearDown network for sandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" successfully"
Apr 30 03:22:17.142825 containerd[1466]: time="2025-04-30T03:22:17.142761734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:22:17.142965 containerd[1466]: time="2025-04-30T03:22:17.142850561Z" level=info msg="RemovePodSandbox \"262bd4fe293495a198321f96d7ee8fc2fd8789f2a18c1a5b6114e259f7ed64d5\" returns successfully"
Apr 30 03:22:17.144234 containerd[1466]: time="2025-04-30T03:22:17.143533804Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\""
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.245 [WARNING][3521] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a59f26ad-a021-4259-8f26-9e83377cfde3", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4", Pod:"nginx-deployment-7fcdb87857-sqxb2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali684e13d43bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.245 [INFO][3521] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.245 [INFO][3521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" iface="eth0" netns=""
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.245 [INFO][3521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.245 [INFO][3521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.266 [INFO][3531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.266 [INFO][3531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.266 [INFO][3531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.273 [WARNING][3531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.273 [INFO][3531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.274 [INFO][3531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:22:17.279526 containerd[1466]: 2025-04-30 03:22:17.277 [INFO][3521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.280259 containerd[1466]: time="2025-04-30T03:22:17.279585713Z" level=info msg="TearDown network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" successfully"
Apr 30 03:22:17.280259 containerd[1466]: time="2025-04-30T03:22:17.279617793Z" level=info msg="StopPodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" returns successfully"
Apr 30 03:22:17.280259 containerd[1466]: time="2025-04-30T03:22:17.280198134Z" level=info msg="RemovePodSandbox for \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\""
Apr 30 03:22:17.280259 containerd[1466]: time="2025-04-30T03:22:17.280225986Z" level=info msg="Forcibly stopping sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\""
Apr 30 03:22:17.423457 kubelet[1770]: E0430 03:22:17.423284 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.536 [WARNING][3553] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a59f26ad-a021-4259-8f26-9e83377cfde3", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 21, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4", Pod:"nginx-deployment-7fcdb87857-sqxb2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali684e13d43bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.536 [INFO][3553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.536 [INFO][3553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" iface="eth0" netns=""
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.536 [INFO][3553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.536 [INFO][3553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.561 [INFO][3562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.561 [INFO][3562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.561 [INFO][3562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.568 [WARNING][3562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.568 [INFO][3562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" HandleID="k8s-pod-network.f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631" Workload="10.0.0.31-k8s-nginx--deployment--7fcdb87857--sqxb2-eth0"
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.570 [INFO][3562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:22:17.575560 containerd[1466]: 2025-04-30 03:22:17.573 [INFO][3553] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631"
Apr 30 03:22:17.576445 containerd[1466]: time="2025-04-30T03:22:17.575618252Z" level=info msg="TearDown network for sandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" successfully"
Apr 30 03:22:17.609101 containerd[1466]: time="2025-04-30T03:22:17.608920090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:22:17.609101 containerd[1466]: time="2025-04-30T03:22:17.609010982Z" level=info msg="RemovePodSandbox \"f638743cd4e96b2b661f2b9e1120cc54710018522c1aea22574905419e727631\" returns successfully"
Apr 30 03:22:17.610006 containerd[1466]: time="2025-04-30T03:22:17.609969612Z" level=info msg="StopPodSandbox for \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\""
Apr 30 03:22:17.610112 containerd[1466]: time="2025-04-30T03:22:17.610082384Z" level=info msg="TearDown network for sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" successfully"
Apr 30 03:22:17.610112 containerd[1466]: time="2025-04-30T03:22:17.610103514Z" level=info msg="StopPodSandbox for \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" returns successfully"
Apr 30 03:22:17.611124 containerd[1466]: time="2025-04-30T03:22:17.611044090Z" level=info msg="RemovePodSandbox for \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\""
Apr 30 03:22:17.611124 containerd[1466]: time="2025-04-30T03:22:17.611075620Z" level=info msg="Forcibly stopping sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\""
Apr 30 03:22:17.611224 containerd[1466]: time="2025-04-30T03:22:17.611134742Z" level=info msg="TearDown network for sandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" successfully"
Apr 30 03:22:17.763117 containerd[1466]: time="2025-04-30T03:22:17.762897133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:22:17.763117 containerd[1466]: time="2025-04-30T03:22:17.762999545Z" level=info msg="RemovePodSandbox \"af65e8b08b598ddb4d96f57c6812895796f172bfe3a458dee0d747c98cc13f82\" returns successfully"
Apr 30 03:22:17.949065 containerd[1466]: time="2025-04-30T03:22:17.949008861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:22:17.982509 containerd[1466]: time="2025-04-30T03:22:17.982407691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
Apr 30 03:22:18.004241 containerd[1466]: time="2025-04-30T03:22:18.004157470Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:22:18.026477 containerd[1466]: time="2025-04-30T03:22:18.026322292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:22:18.027229 containerd[1466]: time="2025-04-30T03:22:18.027178331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.658081094s"
Apr 30 03:22:18.027229 containerd[1466]: time="2025-04-30T03:22:18.027219228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
Apr 30 03:22:18.028391 containerd[1466]: time="2025-04-30T03:22:18.028356043Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Apr 30 03:22:18.029742 containerd[1466]: time="2025-04-30T03:22:18.029679208Z" level=info msg="CreateContainer within sandbox \"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 30 03:22:18.338075 containerd[1466]: time="2025-04-30T03:22:18.337885963Z" level=info msg="CreateContainer within sandbox \"3764de4a5d38d727c96e16f4166fb598a8be2b47843aa3eedb5c1753de5216ad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1d70becd2f8c3ff55116f796059803fa495d62b5a09d6a760da8d49fcb24e13c\""
Apr 30 03:22:18.338610 containerd[1466]: time="2025-04-30T03:22:18.338511028Z" level=info msg="StartContainer for \"1d70becd2f8c3ff55116f796059803fa495d62b5a09d6a760da8d49fcb24e13c\""
Apr 30 03:22:18.371894 systemd[1]: Started cri-containerd-1d70becd2f8c3ff55116f796059803fa495d62b5a09d6a760da8d49fcb24e13c.scope - libcontainer container 1d70becd2f8c3ff55116f796059803fa495d62b5a09d6a760da8d49fcb24e13c.
Apr 30 03:22:18.424159 kubelet[1770]: E0430 03:22:18.424111 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:18.718436 containerd[1466]: time="2025-04-30T03:22:18.718347889Z" level=info msg="StartContainer for \"1d70becd2f8c3ff55116f796059803fa495d62b5a09d6a760da8d49fcb24e13c\" returns successfully"
Apr 30 03:22:19.139718 kubelet[1770]: I0430 03:22:19.139682 1770 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 30 03:22:19.139718 kubelet[1770]: I0430 03:22:19.139719 1770 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 30 03:22:19.305082 kubelet[1770]: I0430 03:22:19.304998 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z88pf" podStartSLOduration=54.159055274 podStartE2EDuration="1m2.304975218s" podCreationTimestamp="2025-04-30 03:21:17 +0000 UTC" firstStartedPulling="2025-04-30 03:22:09.882318689 +0000 UTC m=+54.083087408" lastFinishedPulling="2025-04-30 03:22:18.028238633 +0000 UTC m=+62.229007352" observedRunningTime="2025-04-30 03:22:19.304893324 +0000 UTC m=+63.505662053" watchObservedRunningTime="2025-04-30 03:22:19.304975218 +0000 UTC m=+63.505743927"
Apr 30 03:22:19.424713 kubelet[1770]: E0430 03:22:19.424551 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:20.425297 kubelet[1770]: E0430 03:22:20.425215 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:21.426474 kubelet[1770]: E0430 03:22:21.426407 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:22.427185 kubelet[1770]: E0430 03:22:22.427121 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:23.231759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047462525.mount: Deactivated successfully.
Apr 30 03:22:23.427555 kubelet[1770]: E0430 03:22:23.427483 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:24.149432 containerd[1466]: time="2025-04-30T03:22:24.149339311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:22:24.150284 containerd[1466]: time="2025-04-30T03:22:24.150225385Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306276"
Apr 30 03:22:24.151930 containerd[1466]: time="2025-04-30T03:22:24.151871065Z" level=info msg="ImageCreate event name:\"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:22:24.156807 containerd[1466]: time="2025-04-30T03:22:24.156718817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:22:24.157863 containerd[1466]: time="2025-04-30T03:22:24.157815876Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"73306154\" in 6.129417163s"
Apr 30 03:22:24.157863 containerd[1466]: time="2025-04-30T03:22:24.157857515Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\""
Apr 30 03:22:24.160145 containerd[1466]: time="2025-04-30T03:22:24.160109233Z" level=info msg="CreateContainer within sandbox \"2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Apr 30 03:22:24.179991 containerd[1466]: time="2025-04-30T03:22:24.179911882Z" level=info msg="CreateContainer within sandbox \"2302cdc11ed6c356629bafb2f3ad5f7d54eb2cecc3ca9f3bfbed47102f8bc4e4\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f8a6d2fd802ee41050e91f20a659a5c3c99e2317b13e3bc91614d1208ffd3bf8\""
Apr 30 03:22:24.180776 containerd[1466]: time="2025-04-30T03:22:24.180702737Z" level=info msg="StartContainer for \"f8a6d2fd802ee41050e91f20a659a5c3c99e2317b13e3bc91614d1208ffd3bf8\""
Apr 30 03:22:24.272964 systemd[1]: Started cri-containerd-f8a6d2fd802ee41050e91f20a659a5c3c99e2317b13e3bc91614d1208ffd3bf8.scope - libcontainer container f8a6d2fd802ee41050e91f20a659a5c3c99e2317b13e3bc91614d1208ffd3bf8.
Apr 30 03:22:24.417056 containerd[1466]: time="2025-04-30T03:22:24.416882587Z" level=info msg="StartContainer for \"f8a6d2fd802ee41050e91f20a659a5c3c99e2317b13e3bc91614d1208ffd3bf8\" returns successfully"
Apr 30 03:22:24.427749 kubelet[1770]: E0430 03:22:24.427676 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:25.428492 kubelet[1770]: E0430 03:22:25.428404 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:26.429100 kubelet[1770]: E0430 03:22:26.429018 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:27.429670 kubelet[1770]: E0430 03:22:27.429587 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:28.430756 kubelet[1770]: E0430 03:22:28.430669 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:29.431673 kubelet[1770]: E0430 03:22:29.431614 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:22:29.570209 kubelet[1770]: I0430 03:22:29.570135 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-sqxb2" podStartSLOduration=46.289848524 podStartE2EDuration="55.570110339s" podCreationTimestamp="2025-04-30 03:21:34 +0000 UTC" firstStartedPulling="2025-04-30 03:22:14.878596769 +0000 UTC m=+59.079365488" lastFinishedPulling="2025-04-30 03:22:24.158858584 +0000 UTC m=+68.359627303" observedRunningTime="2025-04-30 03:22:25.276655671 +0000 UTC m=+69.477424390" watchObservedRunningTime="2025-04-30 03:22:29.570110339 +0000 UTC m=+73.770879058"
Apr 30 03:22:29.576487 systemd[1]: Created slice kubepods-besteffort-pod6c96c182_dc13_4414_839a_f8d2a037e8aa.slice - libcontainer container kubepods-besteffort-pod6c96c182_dc13_4414_839a_f8d2a037e8aa.slice.
Apr 30 03:22:29.715002 kubelet[1770]: I0430 03:22:29.714800 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbz69\" (UniqueName: \"kubernetes.io/projected/6c96c182-dc13-4414-839a-f8d2a037e8aa-kube-api-access-kbz69\") pod \"nfs-server-provisioner-0\" (UID: \"6c96c182-dc13-4414-839a-f8d2a037e8aa\") " pod="default/nfs-server-provisioner-0"
Apr 30 03:22:29.715002 kubelet[1770]: I0430 03:22:29.714870 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6c96c182-dc13-4414-839a-f8d2a037e8aa-data\") pod \"nfs-server-provisioner-0\" (UID: \"6c96c182-dc13-4414-839a-f8d2a037e8aa\") " pod="default/nfs-server-provisioner-0"
Apr 30 03:22:29.879339 containerd[1466]: time="2025-04-30T03:22:29.879282991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6c96c182-dc13-4414-839a-f8d2a037e8aa,Namespace:default,Attempt:0,}"
Apr 30 03:22:30.100414 systemd-networkd[1404]: cali60e51b789ff: Link UP
Apr 30 03:22:30.101203 systemd-networkd[1404]: cali60e51b789ff: Gained carrier
Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.010 [INFO][3727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.31-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6c96c182-dc13-4414-839a-f8d2a037e8aa 1312 0 2025-04-30 03:22:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner
statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.31 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.011 [INFO][3727] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.047 [INFO][3741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" HandleID="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Workload="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.060 [INFO][3741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" HandleID="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Workload="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000288160), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.31", "pod":"nfs-server-provisioner-0", "timestamp":"2025-04-30 03:22:30.047933687 +0000 UTC"}, 
Hostname:"10.0.0.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.061 [INFO][3741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.061 [INFO][3741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.061 [INFO][3741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.31' Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.063 [INFO][3741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.068 [INFO][3741] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.073 [INFO][3741] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.075 [INFO][3741] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.078 [INFO][3741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.078 [INFO][3741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.080 [INFO][3741] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674 Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.086 [INFO][3741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.094 [INFO][3741] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.3/26] block=192.168.54.0/26 handle="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.094 [INFO][3741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.3/26] handle="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" host="10.0.0.31" Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.094 [INFO][3741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:22:30.115707 containerd[1466]: 2025-04-30 03:22:30.094 [INFO][3741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.3/26] IPv6=[] ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" HandleID="k8s-pod-network.0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Workload="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.116592 containerd[1466]: 2025-04-30 03:22:30.097 [INFO][3727] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", 
UID:"6c96c182-dc13-4414-839a-f8d2a037e8aa", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:22:30.116592 containerd[1466]: 2025-04-30 03:22:30.098 [INFO][3727] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.3/32] ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.116592 containerd[1466]: 2025-04-30 03:22:30.098 [INFO][3727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.116592 containerd[1466]: 2025-04-30 03:22:30.101 [INFO][3727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" 
Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.116831 containerd[1466]: 2025-04-30 03:22:30.101 [INFO][3727] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6c96c182-dc13-4414-839a-f8d2a037e8aa", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, 
InterfaceName:"cali60e51b789ff", MAC:"be:66:69:a9:0f:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:22:30.116831 containerd[1466]: 2025-04-30 03:22:30.110 [INFO][3727] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.31-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:22:30.191098 containerd[1466]: time="2025-04-30T03:22:30.190978143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:22:30.191098 containerd[1466]: time="2025-04-30T03:22:30.191046330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:22:30.191098 containerd[1466]: time="2025-04-30T03:22:30.191074313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:30.191385 containerd[1466]: time="2025-04-30T03:22:30.191180793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:30.223919 systemd[1]: Started cri-containerd-0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674.scope - libcontainer container 0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674. 
Apr 30 03:22:30.238004 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:22:30.262630 containerd[1466]: time="2025-04-30T03:22:30.262566500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6c96c182-dc13-4414-839a-f8d2a037e8aa,Namespace:default,Attempt:0,} returns sandbox id \"0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674\"" Apr 30 03:22:30.265666 containerd[1466]: time="2025-04-30T03:22:30.265637053Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Apr 30 03:22:30.432751 kubelet[1770]: E0430 03:22:30.432688 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:31.433467 kubelet[1770]: E0430 03:22:31.433324 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:32.009929 systemd-networkd[1404]: cali60e51b789ff: Gained IPv6LL Apr 30 03:22:32.434012 kubelet[1770]: E0430 03:22:32.433953 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:32.618662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252251562.mount: Deactivated successfully. 
Apr 30 03:22:33.434919 kubelet[1770]: E0430 03:22:33.434841 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:34.435762 kubelet[1770]: E0430 03:22:34.435685 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:34.636546 containerd[1466]: time="2025-04-30T03:22:34.636486808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:34.637807 containerd[1466]: time="2025-04-30T03:22:34.637774965Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Apr 30 03:22:34.639295 containerd[1466]: time="2025-04-30T03:22:34.639251457Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:34.644594 containerd[1466]: time="2025-04-30T03:22:34.644546925Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.378872341s" Apr 30 03:22:34.644594 containerd[1466]: time="2025-04-30T03:22:34.644588503Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Apr 30 03:22:34.645418 containerd[1466]: time="2025-04-30T03:22:34.645371001Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:34.647179 containerd[1466]: time="2025-04-30T03:22:34.647142826Z" level=info msg="CreateContainer within sandbox \"0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Apr 30 03:22:34.663229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412614961.mount: Deactivated successfully. Apr 30 03:22:34.667600 containerd[1466]: time="2025-04-30T03:22:34.667552898Z" level=info msg="CreateContainer within sandbox \"0a066314e009d919e32a860eaf46b31eccb4a3df8671c0b941c52afe4a861674\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a15c4cf9f78041df9d0743bd4666ba614ab13eb5c19eb517e9427d5ff68d49c3\"" Apr 30 03:22:34.668288 containerd[1466]: time="2025-04-30T03:22:34.668246460Z" level=info msg="StartContainer for \"a15c4cf9f78041df9d0743bd4666ba614ab13eb5c19eb517e9427d5ff68d49c3\"" Apr 30 03:22:34.708950 systemd[1]: Started cri-containerd-a15c4cf9f78041df9d0743bd4666ba614ab13eb5c19eb517e9427d5ff68d49c3.scope - libcontainer container a15c4cf9f78041df9d0743bd4666ba614ab13eb5c19eb517e9427d5ff68d49c3. 
Apr 30 03:22:34.780526 containerd[1466]: time="2025-04-30T03:22:34.780459095Z" level=info msg="StartContainer for \"a15c4cf9f78041df9d0743bd4666ba614ab13eb5c19eb517e9427d5ff68d49c3\" returns successfully" Apr 30 03:22:35.435911 kubelet[1770]: E0430 03:22:35.435834 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:36.382126 kubelet[1770]: E0430 03:22:36.381951 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:36.436770 kubelet[1770]: E0430 03:22:36.436650 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:37.437274 kubelet[1770]: E0430 03:22:37.437213 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:38.311611 kubelet[1770]: E0430 03:22:38.311572 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:22:38.399214 kubelet[1770]: I0430 03:22:38.399134 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=5.01884881 podStartE2EDuration="9.399076094s" podCreationTimestamp="2025-04-30 03:22:29 +0000 UTC" firstStartedPulling="2025-04-30 03:22:30.265277438 +0000 UTC m=+74.466046157" lastFinishedPulling="2025-04-30 03:22:34.645504722 +0000 UTC m=+78.846273441" observedRunningTime="2025-04-30 03:22:35.356484656 +0000 UTC m=+79.557253375" watchObservedRunningTime="2025-04-30 03:22:38.399076094 +0000 UTC m=+82.599844813" Apr 30 03:22:38.438186 kubelet[1770]: E0430 03:22:38.438117 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:39.438712 kubelet[1770]: E0430 
03:22:39.438626 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:40.439560 kubelet[1770]: E0430 03:22:40.439491 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:41.440069 kubelet[1770]: E0430 03:22:41.439977 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:42.440768 kubelet[1770]: E0430 03:22:42.440641 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:43.441507 kubelet[1770]: E0430 03:22:43.441428 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:44.442208 kubelet[1770]: E0430 03:22:44.442116 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:44.530352 systemd[1]: Created slice kubepods-besteffort-pod996b2b95_6ff5_43ab_b991_0176cbe99a81.slice - libcontainer container kubepods-besteffort-pod996b2b95_6ff5_43ab_b991_0176cbe99a81.slice. 
Apr 30 03:22:44.698992 kubelet[1770]: I0430 03:22:44.698796 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a5584f8-c7b0-4019-a735-2b9760cbb17b\" (UniqueName: \"kubernetes.io/nfs/996b2b95-6ff5-43ab-b991-0176cbe99a81-pvc-9a5584f8-c7b0-4019-a735-2b9760cbb17b\") pod \"test-pod-1\" (UID: \"996b2b95-6ff5-43ab-b991-0176cbe99a81\") " pod="default/test-pod-1" Apr 30 03:22:44.698992 kubelet[1770]: I0430 03:22:44.698899 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tz59\" (UniqueName: \"kubernetes.io/projected/996b2b95-6ff5-43ab-b991-0176cbe99a81-kube-api-access-9tz59\") pod \"test-pod-1\" (UID: \"996b2b95-6ff5-43ab-b991-0176cbe99a81\") " pod="default/test-pod-1" Apr 30 03:22:44.830766 kernel: FS-Cache: Loaded Apr 30 03:22:44.903872 kernel: RPC: Registered named UNIX socket transport module. Apr 30 03:22:44.904035 kernel: RPC: Registered udp transport module. Apr 30 03:22:44.904063 kernel: RPC: Registered tcp transport module. Apr 30 03:22:44.905201 kernel: RPC: Registered tcp-with-tls transport module. Apr 30 03:22:44.905248 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Apr 30 03:22:45.183949 kernel: NFS: Registering the id_resolver key type Apr 30 03:22:45.184141 kernel: Key type id_resolver registered Apr 30 03:22:45.184203 kernel: Key type id_legacy registered Apr 30 03:22:45.234592 nfsidmap[3957]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Apr 30 03:22:45.243027 nfsidmap[3960]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Apr 30 03:22:45.435716 containerd[1466]: time="2025-04-30T03:22:45.435562675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:996b2b95-6ff5-43ab-b991-0176cbe99a81,Namespace:default,Attempt:0,}" Apr 30 03:22:45.443432 kubelet[1770]: E0430 03:22:45.443298 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:45.596259 systemd-networkd[1404]: cali5ec59c6bf6e: Link UP Apr 30 03:22:45.597335 systemd-networkd[1404]: cali5ec59c6bf6e: Gained carrier Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.521 [INFO][3963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.31-k8s-test--pod--1-eth0 default 996b2b95-6ff5-43ab-b991-0176cbe99a81 1381 0 2025-04-30 03:22:29 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.31 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.522 [INFO][3963] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.552 [INFO][3978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" HandleID="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Workload="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.562 [INFO][3978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" HandleID="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Workload="10.0.0.31-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a3b10), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.31", "pod":"test-pod-1", "timestamp":"2025-04-30 03:22:45.552471748 +0000 UTC"}, Hostname:"10.0.0.31", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.562 [INFO][3978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.562 [INFO][3978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.562 [INFO][3978] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.31' Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.564 [INFO][3978] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.569 [INFO][3978] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.573 [INFO][3978] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.575 [INFO][3978] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.577 [INFO][3978] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.577 [INFO][3978] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.578 [INFO][3978] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6 Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.585 [INFO][3978] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.590 [INFO][3978] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.4/26] block=192.168.54.0/26 
handle="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.590 [INFO][3978] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.4/26] handle="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" host="10.0.0.31" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.590 [INFO][3978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.590 [INFO][3978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.4/26] IPv6=[] ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" HandleID="k8s-pod-network.9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Workload="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.593 [INFO][3963] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"996b2b95-6ff5-43ab-b991-0176cbe99a81", ResourceVersion:"1381", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:22:45.607572 containerd[1466]: 2025-04-30 03:22:45.593 [INFO][3963] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.4/32] ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.608413 containerd[1466]: 2025-04-30 03:22:45.593 [INFO][3963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.608413 containerd[1466]: 2025-04-30 03:22:45.596 [INFO][3963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.608413 containerd[1466]: 2025-04-30 03:22:45.597 [INFO][3963] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"996b2b95-6ff5-43ab-b991-0176cbe99a81", ResourceVersion:"1381", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 22, 
29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.31", ContainerID:"9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8a:34:8f:42:85:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:22:45.608413 containerd[1466]: 2025-04-30 03:22:45.604 [INFO][3963] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.31-k8s-test--pod--1-eth0" Apr 30 03:22:45.636560 containerd[1466]: time="2025-04-30T03:22:45.636390187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:22:45.636560 containerd[1466]: time="2025-04-30T03:22:45.636490089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:22:45.636560 containerd[1466]: time="2025-04-30T03:22:45.636504086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:45.636813 containerd[1466]: time="2025-04-30T03:22:45.636634587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:22:45.659924 systemd[1]: Started cri-containerd-9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6.scope - libcontainer container 9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6. Apr 30 03:22:45.675180 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:22:45.703321 containerd[1466]: time="2025-04-30T03:22:45.703194577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:996b2b95-6ff5-43ab-b991-0176cbe99a81,Namespace:default,Attempt:0,} returns sandbox id \"9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6\"" Apr 30 03:22:45.705091 containerd[1466]: time="2025-04-30T03:22:45.705051104Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Apr 30 03:22:46.079557 containerd[1466]: time="2025-04-30T03:22:46.079408290Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:22:46.080333 containerd[1466]: time="2025-04-30T03:22:46.080270140Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Apr 30 03:22:46.083167 containerd[1466]: time="2025-04-30T03:22:46.083131638Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"73306154\" in 378.028133ms" Apr 30 03:22:46.083257 containerd[1466]: time="2025-04-30T03:22:46.083167307Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\"" Apr 30 03:22:46.085117 containerd[1466]: time="2025-04-30T03:22:46.085086310Z" 
level=info msg="CreateContainer within sandbox \"9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Apr 30 03:22:46.100780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504889145.mount: Deactivated successfully. Apr 30 03:22:46.104644 containerd[1466]: time="2025-04-30T03:22:46.104569866Z" level=info msg="CreateContainer within sandbox \"9aa003b8d909db111f7f247df2b715ac03f62bfc262bbeddbb033eaf0c4eb1e6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ca01890f0a03943cbb502fce9cd6e18c420eec356286bfd5153332e0d49a47dc\"" Apr 30 03:22:46.105265 containerd[1466]: time="2025-04-30T03:22:46.105225217Z" level=info msg="StartContainer for \"ca01890f0a03943cbb502fce9cd6e18c420eec356286bfd5153332e0d49a47dc\"" Apr 30 03:22:46.142913 systemd[1]: Started cri-containerd-ca01890f0a03943cbb502fce9cd6e18c420eec356286bfd5153332e0d49a47dc.scope - libcontainer container ca01890f0a03943cbb502fce9cd6e18c420eec356286bfd5153332e0d49a47dc. 
Apr 30 03:22:46.173339 containerd[1466]: time="2025-04-30T03:22:46.173285512Z" level=info msg="StartContainer for \"ca01890f0a03943cbb502fce9cd6e18c420eec356286bfd5153332e0d49a47dc\" returns successfully" Apr 30 03:22:46.316080 kubelet[1770]: I0430 03:22:46.316017 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.936760197 podStartE2EDuration="17.31599814s" podCreationTimestamp="2025-04-30 03:22:29 +0000 UTC" firstStartedPulling="2025-04-30 03:22:45.704627738 +0000 UTC m=+89.905396457" lastFinishedPulling="2025-04-30 03:22:46.083865681 +0000 UTC m=+90.284634400" observedRunningTime="2025-04-30 03:22:46.315433825 +0000 UTC m=+90.516202554" watchObservedRunningTime="2025-04-30 03:22:46.31599814 +0000 UTC m=+90.516766859" Apr 30 03:22:46.443692 kubelet[1770]: E0430 03:22:46.443603 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:46.730014 systemd-networkd[1404]: cali5ec59c6bf6e: Gained IPv6LL Apr 30 03:22:47.444094 kubelet[1770]: E0430 03:22:47.444009 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:48.445020 kubelet[1770]: E0430 03:22:48.444902 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:49.445889 kubelet[1770]: E0430 03:22:49.445720 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:50.446401 kubelet[1770]: E0430 03:22:50.446319 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:51.446774 kubelet[1770]: E0430 03:22:51.446659 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:52.447105 
kubelet[1770]: E0430 03:22:52.447010 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:22:53.080931 kubelet[1770]: E0430 03:22:53.080863 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"