May 9 00:29:25.908468 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025
May 9 00:29:25.908488 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:29:25.908500 kernel: BIOS-provided physical RAM map:
May 9 00:29:25.908507 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:29:25.908513 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 9 00:29:25.908519 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 9 00:29:25.908526 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 9 00:29:25.908533 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 9 00:29:25.908539 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 9 00:29:25.908546 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 9 00:29:25.908555 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 9 00:29:25.908561 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 9 00:29:25.908572 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 9 00:29:25.908578 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 9 00:29:25.908589 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 9 00:29:25.908596 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 9 00:29:25.908605 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 9 00:29:25.908612 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 9 00:29:25.908619 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 9 00:29:25.908626 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 9 00:29:25.908633 kernel: NX (Execute Disable) protection: active
May 9 00:29:25.908640 kernel: APIC: Static calls initialized
May 9 00:29:25.908646 kernel: efi: EFI v2.7 by EDK II
May 9 00:29:25.908653 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 9 00:29:25.908660 kernel: SMBIOS 2.8 present.
May 9 00:29:25.908667 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 9 00:29:25.908674 kernel: Hypervisor detected: KVM
May 9 00:29:25.908683 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:29:25.908690 kernel: kvm-clock: using sched offset of 5279140067 cycles
May 9 00:29:25.908698 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:29:25.908705 kernel: tsc: Detected 2794.748 MHz processor
May 9 00:29:25.908712 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:29:25.908720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:29:25.908727 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 9 00:29:25.908734 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 9 00:29:25.908741 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:29:25.908751 kernel: Using GB pages for direct mapping
May 9 00:29:25.908758 kernel: Secure boot disabled
May 9 00:29:25.908765 kernel: ACPI: Early table checksum verification disabled
May 9 00:29:25.908772 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 9 00:29:25.908783 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 9 00:29:25.908791 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:29:25.908798 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:29:25.908808 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 9 00:29:25.908815 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:29:25.908825 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:29:25.908833 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:29:25.908840 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:29:25.908848 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 9 00:29:25.908855 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 9 00:29:25.908866 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 9 00:29:25.908873 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 9 00:29:25.908880 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 9 00:29:25.908887 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 9 00:29:25.908895 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 9 00:29:25.908902 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 9 00:29:25.908909 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 9 00:29:25.908916 kernel: No NUMA configuration found
May 9 00:29:25.908926 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 9 00:29:25.908936 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 9 00:29:25.908944 kernel: Zone ranges:
May 9 00:29:25.908951 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:29:25.908958 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 9 00:29:25.908966 kernel: Normal empty
May 9 00:29:25.908973 kernel: Movable zone start for each node
May 9 00:29:25.908980 kernel: Early memory node ranges
May 9 00:29:25.908987 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 9 00:29:25.908995 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 9 00:29:25.909002 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 9 00:29:25.909012 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 9 00:29:25.909019 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 9 00:29:25.909026 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 9 00:29:25.909036 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 9 00:29:25.909043 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:29:25.909051 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 9 00:29:25.909058 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 9 00:29:25.909065 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:29:25.909073 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 9 00:29:25.909083 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 9 00:29:25.909091 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 9 00:29:25.909098 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 00:29:25.909105 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:29:25.909113 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 00:29:25.909120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 00:29:25.909127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:29:25.909135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:29:25.909142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:29:25.909152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:29:25.909160 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:29:25.909167 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:29:25.909174 kernel: TSC deadline timer available
May 9 00:29:25.909182 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 9 00:29:25.909189 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:29:25.909196 kernel: kvm-guest: KVM setup pv remote TLB flush
May 9 00:29:25.909204 kernel: kvm-guest: setup PV sched yield
May 9 00:29:25.909211 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 9 00:29:25.909221 kernel: Booting paravirtualized kernel on KVM
May 9 00:29:25.909228 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:29:25.909236 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 9 00:29:25.909243 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 9 00:29:25.909250 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 9 00:29:25.909258 kernel: pcpu-alloc: [0] 0 1 2 3
May 9 00:29:25.909265 kernel: kvm-guest: PV spinlocks enabled
May 9 00:29:25.909272 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 9 00:29:25.909281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:29:25.909319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:29:25.909326 kernel: random: crng init done
May 9 00:29:25.909334 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:29:25.909341 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:29:25.909349 kernel: Fallback order for Node 0: 0
May 9 00:29:25.909356 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 9 00:29:25.909363 kernel: Policy zone: DMA32
May 9 00:29:25.909371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:29:25.909381 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved)
May 9 00:29:25.909388 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:29:25.909396 kernel: ftrace: allocating 37944 entries in 149 pages
May 9 00:29:25.909403 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:29:25.909411 kernel: Dynamic Preempt: voluntary
May 9 00:29:25.909439 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:29:25.909450 kernel: rcu: RCU event tracing is enabled.
May 9 00:29:25.909458 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:29:25.909465 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:29:25.909473 kernel: Rude variant of Tasks RCU enabled.
May 9 00:29:25.909481 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:29:25.909489 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:29:25.909499 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:29:25.909506 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 9 00:29:25.909517 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:29:25.909524 kernel: Console: colour dummy device 80x25
May 9 00:29:25.909532 kernel: printk: console [ttyS0] enabled
May 9 00:29:25.909542 kernel: ACPI: Core revision 20230628
May 9 00:29:25.909550 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 9 00:29:25.909558 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:29:25.909566 kernel: x2apic enabled
May 9 00:29:25.909573 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:29:25.909581 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 9 00:29:25.909589 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 9 00:29:25.909596 kernel: kvm-guest: setup PV IPIs
May 9 00:29:25.909604 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 00:29:25.909615 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 00:29:25.909623 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 9 00:29:25.909630 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 9 00:29:25.909638 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 9 00:29:25.909645 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 9 00:29:25.909653 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:29:25.909661 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:29:25.909669 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:29:25.909677 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 9 00:29:25.909687 kernel: RETBleed: Mitigation: untrained return thunk
May 9 00:29:25.909695 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 9 00:29:25.909703 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 9 00:29:25.909711 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 9 00:29:25.909721 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 9 00:29:25.909729 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 9 00:29:25.909737 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:29:25.909745 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:29:25.909755 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:29:25.909763 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:29:25.909771 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 9 00:29:25.909779 kernel: Freeing SMP alternatives memory: 32K
May 9 00:29:25.909786 kernel: pid_max: default: 32768 minimum: 301
May 9 00:29:25.909794 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:29:25.909802 kernel: landlock: Up and running.
May 9 00:29:25.909809 kernel: SELinux: Initializing.
May 9 00:29:25.909817 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:29:25.909827 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:29:25.909835 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 9 00:29:25.909843 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:29:25.909851 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:29:25.909859 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:29:25.909866 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 9 00:29:25.909874 kernel: ... version: 0
May 9 00:29:25.909882 kernel: ... bit width: 48
May 9 00:29:25.909889 kernel: ... generic registers: 6
May 9 00:29:25.909900 kernel: ... value mask: 0000ffffffffffff
May 9 00:29:25.909910 kernel: ... max period: 00007fffffffffff
May 9 00:29:25.909918 kernel: ... fixed-purpose events: 0
May 9 00:29:25.909927 kernel: ... event mask: 000000000000003f
May 9 00:29:25.909935 kernel: signal: max sigframe size: 1776
May 9 00:29:25.909943 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:29:25.909951 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:29:25.909959 kernel: smp: Bringing up secondary CPUs ...
May 9 00:29:25.909966 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:29:25.909977 kernel: .... node #0, CPUs: #1 #2 #3
May 9 00:29:25.909985 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:29:25.909992 kernel: smpboot: Max logical packages: 1
May 9 00:29:25.910002 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 9 00:29:25.910010 kernel: devtmpfs: initialized
May 9 00:29:25.910018 kernel: x86/mm: Memory block size: 128MB
May 9 00:29:25.910025 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 9 00:29:25.910033 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 9 00:29:25.910041 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 9 00:29:25.910052 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 9 00:29:25.910060 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 9 00:29:25.910067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:29:25.910075 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:29:25.910083 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:29:25.910090 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:29:25.910098 kernel: audit: initializing netlink subsys (disabled)
May 9 00:29:25.910106 kernel: audit: type=2000 audit(1746750564.877:1): state=initialized audit_enabled=0 res=1
May 9 00:29:25.910114 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:29:25.910124 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:29:25.910132 kernel: cpuidle: using governor menu
May 9 00:29:25.910139 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:29:25.910147 kernel: dca service started, version 1.12.1
May 9 00:29:25.910155 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 9 00:29:25.910162 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 9 00:29:25.910170 kernel: PCI: Using configuration type 1 for base access
May 9 00:29:25.910178 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:29:25.910186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:29:25.910196 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:29:25.910204 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:29:25.910211 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:29:25.910219 kernel: ACPI: Added _OSI(Module Device)
May 9 00:29:25.910226 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:29:25.910234 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:29:25.910242 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:29:25.910249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:29:25.910257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:29:25.910267 kernel: ACPI: Interpreter enabled
May 9 00:29:25.910275 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 00:29:25.910290 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:29:25.910301 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:29:25.910310 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:29:25.910318 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 9 00:29:25.910326 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:29:25.910899 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:29:25.911049 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 9 00:29:25.911180 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 9 00:29:25.911191 kernel: PCI host bridge to bus 0000:00
May 9 00:29:25.911345 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:29:25.911482 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:29:25.911602 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:29:25.911718 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 9 00:29:25.911840 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 00:29:25.911957 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 9 00:29:25.912074 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:29:25.912244 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 9 00:29:25.912410 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 9 00:29:25.912654 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 9 00:29:25.912787 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 9 00:29:25.912919 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 9 00:29:25.913069 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 9 00:29:25.913195 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:29:25.913346 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:29:25.913490 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 9 00:29:25.913618 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 9 00:29:25.913751 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 9 00:29:25.913894 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 9 00:29:25.914022 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 9 00:29:25.914147 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 9 00:29:25.914273 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 9 00:29:25.914456 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 9 00:29:25.914606 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 9 00:29:25.914743 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 9 00:29:25.914869 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 9 00:29:25.914995 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 9 00:29:25.915147 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 9 00:29:25.915280 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 9 00:29:25.915448 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 9 00:29:25.915578 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 9 00:29:25.915712 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 9 00:29:25.915857 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 9 00:29:25.915991 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 9 00:29:25.916001 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:29:25.916009 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:29:25.916017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:29:25.916025 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:29:25.916037 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 9 00:29:25.916044 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 9 00:29:25.916052 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 9 00:29:25.916060 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 9 00:29:25.916068 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 9 00:29:25.916075 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 9 00:29:25.916083 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 9 00:29:25.916091 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 9 00:29:25.916098 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 9 00:29:25.916109 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 9 00:29:25.916117 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 9 00:29:25.916124 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 9 00:29:25.916132 kernel: iommu: Default domain type: Translated
May 9 00:29:25.916140 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:29:25.916148 kernel: efivars: Registered efivars operations
May 9 00:29:25.916155 kernel: PCI: Using ACPI for IRQ routing
May 9 00:29:25.916163 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:29:25.916171 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 9 00:29:25.916178 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 9 00:29:25.916188 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 9 00:29:25.916196 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 9 00:29:25.916333 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 9 00:29:25.916477 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 9 00:29:25.916606 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:29:25.916616 kernel: vgaarb: loaded
May 9 00:29:25.916624 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 9 00:29:25.916632 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 9 00:29:25.916644 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:29:25.916652 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:29:25.916660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:29:25.916668 kernel: pnp: PnP ACPI init
May 9 00:29:25.916839 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 9 00:29:25.916851 kernel: pnp: PnP ACPI: found 6 devices
May 9 00:29:25.916859 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:29:25.916867 kernel: NET: Registered PF_INET protocol family
May 9 00:29:25.916880 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:29:25.916888 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:29:25.916896 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:29:25.916904 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:29:25.916912 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:29:25.916920 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:29:25.916928 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:29:25.916937 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:29:25.916945 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:29:25.916955 kernel: NET: Registered PF_XDP protocol family
May 9 00:29:25.917085 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 9 00:29:25.917212 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 9 00:29:25.917339 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:29:25.917508 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:29:25.917625 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:29:25.917738 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 9 00:29:25.917850 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 9 00:29:25.917970 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 9 00:29:25.917980 kernel: PCI: CLS 0 bytes, default 64
May 9 00:29:25.917988 kernel: Initialise system trusted keyrings
May 9 00:29:25.917996 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:29:25.918003 kernel: Key type asymmetric registered
May 9 00:29:25.918011 kernel: Asymmetric key parser 'x509' registered
May 9 00:29:25.918019 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:29:25.918026 kernel: io scheduler mq-deadline registered
May 9 00:29:25.918034 kernel: io scheduler kyber registered
May 9 00:29:25.918045 kernel: io scheduler bfq registered
May 9 00:29:25.918053 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:29:25.918061 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 9 00:29:25.918069 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 9 00:29:25.918077 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 9 00:29:25.918085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:29:25.918093 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:29:25.918101 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:29:25.918108 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:29:25.918119 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:29:25.918253 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 00:29:25.918265 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:29:25.918396 kernel: rtc_cmos 00:04: registered as rtc0
May 9 00:29:25.918594 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:29:25 UTC (1746750565)
May 9 00:29:25.918715 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 9 00:29:25.918726 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 00:29:25.918734 kernel: efifb: probing for efifb
May 9 00:29:25.918746 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 9 00:29:25.918754 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 9 00:29:25.918762 kernel: efifb: scrolling: redraw
May 9 00:29:25.918770 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 9 00:29:25.918777 kernel: Console: switching to colour frame buffer device 100x37
May 9 00:29:25.918785 kernel: fb0: EFI VGA frame buffer device
May 9 00:29:25.918810 kernel: pstore: Using crash dump compression: deflate
May 9 00:29:25.918821 kernel: pstore: Registered efi_pstore as persistent store backend
May 9 00:29:25.918829 kernel: NET: Registered PF_INET6 protocol family
May 9 00:29:25.918839 kernel: Segment Routing with IPv6
May 9 00:29:25.918847 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:29:25.918855 kernel: NET: Registered PF_PACKET protocol family
May 9 00:29:25.918863 kernel: Key type dns_resolver registered
May 9 00:29:25.918871 kernel: IPI shorthand broadcast: enabled
May 9 00:29:25.918879 kernel: sched_clock: Marking stable (1034003292, 115230865)->(1206690186, -57456029)
May 9 00:29:25.918887 kernel: registered taskstats version 1
May 9 00:29:25.918895 kernel: Loading compiled-in X.509 certificates
May 9 00:29:25.918903 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc'
May 9 00:29:25.918914 kernel: Key type .fscrypt registered
May 9 00:29:25.918924 kernel: Key type fscrypt-provisioning registered
May 9 00:29:25.918932 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:29:25.918943 kernel: ima: Allocated hash algorithm: sha1
May 9 00:29:25.918952 kernel: ima: No architecture policies found
May 9 00:29:25.918959 kernel: clk: Disabling unused clocks
May 9 00:29:25.918968 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 9 00:29:25.918976 kernel: Write protecting the kernel read-only data: 36864k
May 9 00:29:25.918986 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 9 00:29:25.918994 kernel: Run /init as init process
May 9 00:29:25.919003 kernel: with arguments:
May 9 00:29:25.919011 kernel: /init
May 9 00:29:25.919019 kernel: with environment:
May 9 00:29:25.919026 kernel: HOME=/
May 9 00:29:25.919034 kernel: TERM=linux
May 9 00:29:25.919042 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:29:25.919053 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:29:25.919065 systemd[1]: Detected virtualization kvm.
May 9 00:29:25.919074 systemd[1]: Detected architecture x86-64.
May 9 00:29:25.919082 systemd[1]: Running in initrd.
May 9 00:29:25.919091 systemd[1]: No hostname configured, using default hostname.
May 9 00:29:25.919104 systemd[1]: Hostname set to .
May 9 00:29:25.919112 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:29:25.919121 systemd[1]: Queued start job for default target initrd.target.
May 9 00:29:25.919130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:29:25.919138 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:29:25.919148 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:29:25.919156 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:29:25.919165 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:29:25.919176 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:29:25.919187 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:29:25.919195 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:29:25.919204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:29:25.919212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:29:25.919221 systemd[1]: Reached target paths.target - Path Units.
May 9 00:29:25.919229 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:29:25.919241 systemd[1]: Reached target swap.target - Swaps.
May 9 00:29:25.919249 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:29:25.919257 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:29:25.919266 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:29:25.919275 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:29:25.919293 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:29:25.919302 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:29:25.919311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:29:25.919319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:29:25.919331 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:29:25.919339 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:29:25.919348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:29:25.919357 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:29:25.919365 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:29:25.919373 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:29:25.919382 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:29:25.919390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:29:25.919401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:29:25.919410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:29:25.919432 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:29:25.919442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:29:25.919470 systemd-journald[193]: Collecting audit messages is disabled.
May 9 00:29:25.919492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:29:25.919501 systemd-journald[193]: Journal started
May 9 00:29:25.919522 systemd-journald[193]: Runtime Journal (/run/log/journal/607e93cd0bba4a869345a39bea427562) is 6.0M, max 48.3M, 42.2M free.
May 9 00:29:25.922492 systemd-modules-load[194]: Inserted module 'overlay'
May 9 00:29:25.924438 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:29:25.927463 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:29:25.928366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:29:25.934331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:29:25.937563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:29:25.940293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:29:25.952269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:29:25.954231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:29:25.956272 systemd-modules-load[194]: Inserted module 'br_netfilter' May 9 00:29:25.956736 kernel: Bridge firewalling registered May 9 00:29:25.957912 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:29:25.959822 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:29:25.968594 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:29:25.970712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:29:25.982209 dracut-cmdline[223]: dracut-dracut-053 May 9 00:29:25.985501 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:29:25.991488 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:29:26.001623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:29:26.032883 systemd-resolved[248]: Positive Trust Anchors: May 9 00:29:26.032909 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:29:26.032942 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:29:26.035799 systemd-resolved[248]: Defaulting to hostname 'linux'. May 9 00:29:26.037188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:29:26.042979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:29:26.075458 kernel: SCSI subsystem initialized May 9 00:29:26.087445 kernel: Loading iSCSI transport class v2.0-870. May 9 00:29:26.097450 kernel: iscsi: registered transport (tcp) May 9 00:29:26.118449 kernel: iscsi: registered transport (qla4xxx) May 9 00:29:26.118475 kernel: QLogic iSCSI HBA Driver May 9 00:29:26.176996 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:29:26.191713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:29:26.218265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 9 00:29:26.218367 kernel: device-mapper: uevent: version 1.0.3 May 9 00:29:26.218400 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:29:26.263459 kernel: raid6: avx2x4 gen() 28250 MB/s May 9 00:29:26.280438 kernel: raid6: avx2x2 gen() 30361 MB/s May 9 00:29:26.297544 kernel: raid6: avx2x1 gen() 26036 MB/s May 9 00:29:26.297571 kernel: raid6: using algorithm avx2x2 gen() 30361 MB/s May 9 00:29:26.315537 kernel: raid6: .... xor() 19998 MB/s, rmw enabled May 9 00:29:26.315558 kernel: raid6: using avx2x2 recovery algorithm May 9 00:29:26.337443 kernel: xor: automatically using best checksumming function avx May 9 00:29:26.491455 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:29:26.503747 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:29:26.515624 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:29:26.527861 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 9 00:29:26.532607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:29:26.542559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 00:29:26.555444 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation May 9 00:29:26.586178 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:29:26.607575 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:29:26.675979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:29:26.690609 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:29:26.703643 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 9 00:29:26.706489 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
May 9 00:29:26.716848 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:29:26.717118 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:29:26.717152 kernel: GPT:9289727 != 19775487 May 9 00:29:26.717179 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:29:26.717216 kernel: GPT:9289727 != 19775487 May 9 00:29:26.717242 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:29:26.717258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:29:26.710364 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:29:26.718918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:29:26.722208 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:29:26.726037 kernel: cryptd: max_cpu_qlen set to 1000 May 9 00:29:26.737631 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:29:26.743438 kernel: libata version 3.00 loaded. May 9 00:29:26.750077 kernel: ahci 0000:00:1f.2: version 3.0 May 9 00:29:26.750332 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 9 00:29:26.745684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:29:26.761920 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 9 00:29:26.762162 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 9 00:29:26.762363 kernel: AVX2 version of gcm_enc/dec engaged. May 9 00:29:26.762380 kernel: AES CTR mode by8 optimization enabled May 9 00:29:26.745904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:29:26.747683 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 9 00:29:26.771539 kernel: scsi host0: ahci May 9 00:29:26.771807 kernel: scsi host1: ahci May 9 00:29:26.749127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:29:26.749371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:29:26.781328 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473) May 9 00:29:26.781356 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (472) May 9 00:29:26.754102 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:29:26.784273 kernel: scsi host2: ahci May 9 00:29:26.784484 kernel: scsi host3: ahci May 9 00:29:26.784671 kernel: scsi host4: ahci May 9 00:29:26.764901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:29:26.771291 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:29:26.792846 kernel: scsi host5: ahci May 9 00:29:26.793022 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 9 00:29:26.793035 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 9 00:29:26.793045 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 9 00:29:26.793056 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 9 00:29:26.793066 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 9 00:29:26.793082 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 9 00:29:26.794964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:29:26.803080 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:29:26.817565 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
May 9 00:29:26.826305 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 00:29:26.827588 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:29:26.834543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:29:26.845539 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:29:26.846693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:29:26.846752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:29:26.849235 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:29:26.852662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:29:26.857462 disk-uuid[554]: Primary Header is updated. May 9 00:29:26.857462 disk-uuid[554]: Secondary Entries is updated. May 9 00:29:26.857462 disk-uuid[554]: Secondary Header is updated. May 9 00:29:26.861044 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:29:26.863452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:29:26.870672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:29:26.877955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:29:26.910399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 9 00:29:27.098471 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 9 00:29:27.098556 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 9 00:29:27.098567 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 9 00:29:27.099977 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 9 00:29:27.099999 kernel: ata3.00: applying bridge limits May 9 00:29:27.101438 kernel: ata3.00: configured for UDMA/100 May 9 00:29:27.103449 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 9 00:29:27.106443 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 9 00:29:27.106469 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 9 00:29:27.107450 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 9 00:29:27.149469 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 9 00:29:27.149736 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 9 00:29:27.163461 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 9 00:29:27.866085 disk-uuid[556]: The operation has completed successfully. May 9 00:29:27.867389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:29:27.892468 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:29:27.892621 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:29:27.925604 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:29:27.929151 sh[598]: Success May 9 00:29:27.942453 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 9 00:29:27.974289 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:29:27.983017 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:29:27.987470 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 9 00:29:27.997111 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1 May 9 00:29:27.997147 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 9 00:29:27.997163 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:29:27.998166 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:29:27.999589 kernel: BTRFS info (device dm-0): using free space tree May 9 00:29:28.003571 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:29:28.005036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:29:28.014568 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:29:28.016368 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:29:28.026105 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:29:28.026142 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:29:28.026153 kernel: BTRFS info (device vda6): using free space tree May 9 00:29:28.029446 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:29:28.038544 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:29:28.040431 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:29:28.049013 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 00:29:28.057578 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 9 00:29:28.113309 ignition[689]: Ignition 2.19.0 May 9 00:29:28.113321 ignition[689]: Stage: fetch-offline May 9 00:29:28.113357 ignition[689]: no configs at "/usr/lib/ignition/base.d" May 9 00:29:28.113367 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:29:28.113484 ignition[689]: parsed url from cmdline: "" May 9 00:29:28.113488 ignition[689]: no config URL provided May 9 00:29:28.113493 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:29:28.113503 ignition[689]: no config at "/usr/lib/ignition/user.ign" May 9 00:29:28.113529 ignition[689]: op(1): [started] loading QEMU firmware config module May 9 00:29:28.113534 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:29:28.121844 ignition[689]: op(1): [finished] loading QEMU firmware config module May 9 00:29:28.124407 ignition[689]: parsing config with SHA512: 250fdcd66b706263e7f1a4fcd92d87fb84731f20a25add91c7d5bb076e0c6507b49a842ad3549d89d88c797c44e8d715106a6d33a5deec8cde9b5a6a8d6963dd May 9 00:29:28.126784 unknown[689]: fetched base config from "system" May 9 00:29:28.127322 unknown[689]: fetched user config from "qemu" May 9 00:29:28.127611 ignition[689]: fetch-offline: fetch-offline passed May 9 00:29:28.127675 ignition[689]: Ignition finished successfully May 9 00:29:28.129683 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:29:28.147964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:29:28.164648 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:29:28.185584 systemd-networkd[787]: lo: Link UP May 9 00:29:28.185593 systemd-networkd[787]: lo: Gained carrier May 9 00:29:28.187290 systemd-networkd[787]: Enumeration completed May 9 00:29:28.187396 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 9 00:29:28.187695 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:29:28.187699 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:29:28.188968 systemd-networkd[787]: eth0: Link UP May 9 00:29:28.188972 systemd-networkd[787]: eth0: Gained carrier May 9 00:29:28.188979 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:29:28.189701 systemd[1]: Reached target network.target - Network. May 9 00:29:28.191501 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:29:28.201595 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 00:29:28.209487 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:29:28.217069 ignition[789]: Ignition 2.19.0 May 9 00:29:28.217080 ignition[789]: Stage: kargs May 9 00:29:28.217247 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 9 00:29:28.217258 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:29:28.217888 ignition[789]: kargs: kargs passed May 9 00:29:28.217932 ignition[789]: Ignition finished successfully May 9 00:29:28.224972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:29:28.236620 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 9 00:29:28.249698 ignition[798]: Ignition 2.19.0 May 9 00:29:28.249709 ignition[798]: Stage: disks May 9 00:29:28.249899 ignition[798]: no configs at "/usr/lib/ignition/base.d" May 9 00:29:28.249910 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:29:28.250547 ignition[798]: disks: disks passed May 9 00:29:28.250593 ignition[798]: Ignition finished successfully May 9 00:29:28.256521 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:29:28.258769 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 00:29:28.260911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:29:28.263268 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:29:28.265200 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:29:28.265700 systemd[1]: Reached target basic.target - Basic System. May 9 00:29:28.279578 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:29:28.290901 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.53 May 9 00:29:28.290915 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. May 9 00:29:28.293186 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:29:28.300441 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:29:28.304406 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:29:28.395451 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none. May 9 00:29:28.396260 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:29:28.397401 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:29:28.417618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 9 00:29:28.419844 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:29:28.421434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:29:28.427408 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) May 9 00:29:28.427453 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:29:28.421483 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:29:28.434473 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:29:28.434492 kernel: BTRFS info (device vda6): using free space tree May 9 00:29:28.434504 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:29:28.421513 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:29:28.431203 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:29:28.436112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:29:28.454664 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:29:28.490560 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:29:28.495665 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory May 9 00:29:28.499375 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:29:28.503068 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:29:28.583982 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:29:28.593525 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:29:28.594780 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 9 00:29:28.605446 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:29:28.619753 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 00:29:28.632558 ignition[932]: INFO : Ignition 2.19.0 May 9 00:29:28.632558 ignition[932]: INFO : Stage: mount May 9 00:29:28.634210 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:29:28.634210 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:29:28.634210 ignition[932]: INFO : mount: mount passed May 9 00:29:28.634210 ignition[932]: INFO : Ignition finished successfully May 9 00:29:28.638153 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:29:28.649514 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:29:28.996557 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:29:29.005659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:29:29.013388 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) May 9 00:29:29.013436 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:29:29.013448 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:29:29.014896 kernel: BTRFS info (device vda6): using free space tree May 9 00:29:29.017465 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:29:29.018889 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:29:29.041611 ignition[961]: INFO : Ignition 2.19.0 May 9 00:29:29.041611 ignition[961]: INFO : Stage: files May 9 00:29:29.043428 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:29:29.043428 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:29:29.043428 ignition[961]: DEBUG : files: compiled without relabeling support, skipping May 9 00:29:29.047047 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:29:29.047047 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:29:29.047047 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:29:29.047047 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:29:29.047047 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:29:29.046962 unknown[961]: wrote ssh authorized keys file for user: core May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:29:29.055286 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 9 00:29:29.394356 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 9 00:29:29.404562 systemd-networkd[787]: eth0: Gained IPv6LL May 9 00:29:29.793185 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:29:29.793185 ignition[961]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 9 00:29:29.796980 ignition[961]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:29:29.796980 ignition[961]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:29:29.796980 ignition[961]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 9 00:29:29.796980 ignition[961]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 9 00:29:29.814987 ignition[961]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:29:29.820308 ignition[961]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:29:29.821894 ignition[961]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:29:29.821894 ignition[961]: INFO 
: files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:29:29.821894 ignition[961]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:29:29.821894 ignition[961]: INFO : files: files passed May 9 00:29:29.821894 ignition[961]: INFO : Ignition finished successfully May 9 00:29:29.823276 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:29:29.842585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:29:29.843579 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:29:29.846636 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:29:29.846751 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:29:29.853880 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:29:29.856520 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:29:29.858321 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:29:29.859947 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:29:29.863030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:29:29.864530 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:29:29.877567 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:29:29.901538 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:29:29.901664 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
May 9 00:29:29.902480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:29:29.905338 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:29:29.905870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:29:29.912536 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:29:29.928739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:29:29.935583 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:29:29.946123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:29:29.947461 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:29:29.949729 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:29:29.951752 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:29:29.951865 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:29:29.954075 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:29:29.955819 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:29:29.957858 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:29:29.959902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:29:29.962082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:29:29.964294 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:29:29.966475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:29:29.968808 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:29:29.970867 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:29:29.973107 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:29:29.974927 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:29:29.975093 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:29:29.977250 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:29:29.978948 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:29:29.981077 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:29:29.981168 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:29:29.983293 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:29:29.983407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:29:29.985608 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:29:29.985721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:29:29.987755 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:29:29.989503 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:29:29.992481 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:29:29.994121 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:29:29.996081 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:29:29.998149 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:29:29.998254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:29:30.000140 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:29:30.000241 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:29:30.002202 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:29:30.002317 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:29:30.004916 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:29:30.005025 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:29:30.015565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:29:30.017144 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:29:30.018253 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:29:30.018374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:29:30.020574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:29:30.020792 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:29:30.027846 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:29:30.028002 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:29:30.031216 ignition[1015]: INFO : Ignition 2.19.0
May 9 00:29:30.031216 ignition[1015]: INFO : Stage: umount
May 9 00:29:30.031216 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:29:30.031216 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:29:30.031216 ignition[1015]: INFO : umount: umount passed
May 9 00:29:30.031216 ignition[1015]: INFO : Ignition finished successfully
May 9 00:29:30.030988 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:29:30.031115 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:29:30.031928 systemd[1]: Stopped target network.target - Network.
May 9 00:29:30.033202 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:29:30.033257 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:29:30.033760 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:29:30.033809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:29:30.036720 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:29:30.036769 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:29:30.038801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:29:30.038853 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:29:30.039240 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:29:30.042211 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:29:30.047399 systemd-networkd[787]: eth0: DHCPv6 lease lost
May 9 00:29:30.051103 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:29:30.051275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:29:30.054950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:29:30.055031 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:29:30.060882 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:29:30.061027 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:29:30.062551 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:29:30.062605 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:29:30.079601 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:29:30.080034 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:29:30.080091 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:29:30.080433 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:29:30.080483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:29:30.080922 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:29:30.080967 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:29:30.081408 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:29:30.083617 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:29:30.094891 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:29:30.095043 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:29:30.103312 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:29:30.103525 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:29:30.105848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:29:30.105902 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:29:30.107952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:29:30.107992 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:29:30.110007 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:29:30.110058 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:29:30.112177 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:29:30.112235 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:29:30.114348 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:29:30.114399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:29:30.123602 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:29:30.124750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:29:30.124810 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:29:30.127157 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 00:29:30.127219 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:29:30.129456 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:29:30.129509 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:29:30.131933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:29:30.131981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:29:30.134550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:29:30.134667 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:29:30.237317 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:29:30.237500 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:29:30.240365 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:29:30.242097 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:29:30.242172 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:29:30.261635 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:29:30.269018 systemd[1]: Switching root.
May 9 00:29:30.305290 systemd-journald[193]: Journal stopped
May 9 00:29:31.527770 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 9 00:29:31.527890 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:29:31.527910 kernel: SELinux: policy capability open_perms=1
May 9 00:29:31.527924 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:29:31.527937 kernel: SELinux: policy capability always_check_network=0
May 9 00:29:31.527961 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:29:31.527991 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:29:31.528014 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:29:31.528028 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:29:31.528047 kernel: audit: type=1403 audit(1746750570.698:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:29:31.528062 systemd[1]: Successfully loaded SELinux policy in 43.031ms.
May 9 00:29:31.528096 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.913ms.
May 9 00:29:31.528114 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:29:31.528139 systemd[1]: Detected virtualization kvm.
May 9 00:29:31.528167 systemd[1]: Detected architecture x86-64.
May 9 00:29:31.528186 systemd[1]: Detected first boot.
May 9 00:29:31.528198 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:29:31.528213 zram_generator::config[1059]: No configuration found.
May 9 00:29:31.528230 systemd[1]: Populated /etc with preset unit settings.
May 9 00:29:31.528242 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 00:29:31.528257 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 00:29:31.528269 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 00:29:31.528284 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:29:31.528296 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:29:31.528316 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:29:31.528328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:29:31.528342 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:29:31.528355 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:29:31.528369 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:29:31.528381 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:29:31.528393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:29:31.528412 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:29:31.528512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:29:31.528535 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:29:31.528553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:29:31.528576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:29:31.528603 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:29:31.528616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:29:31.528645 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 00:29:31.528671 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 00:29:31.528696 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 00:29:31.528739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:29:31.528767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:29:31.528793 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:29:31.528813 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:29:31.528829 systemd[1]: Reached target swap.target - Swaps.
May 9 00:29:31.528843 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:29:31.528857 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:29:31.528869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:29:31.528888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:29:31.528900 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:29:31.528915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:29:31.528929 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:29:31.528945 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:29:31.528970 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:29:31.528996 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:31.529022 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:29:31.529046 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:29:31.529079 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:29:31.529101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:29:31.529123 systemd[1]: Reached target machines.target - Containers.
May 9 00:29:31.529142 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:29:31.529162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:29:31.529175 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:29:31.529188 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:29:31.529200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:29:31.529229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:29:31.529242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:29:31.529259 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:29:31.529271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:29:31.529284 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:29:31.529296 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 00:29:31.529311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 00:29:31.529337 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 00:29:31.529362 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 00:29:31.529389 kernel: fuse: init (API version 7.39)
May 9 00:29:31.529415 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:29:31.529448 kernel: loop: module loaded
May 9 00:29:31.529479 systemd-journald[1126]: Collecting audit messages is disabled.
May 9 00:29:31.529512 kernel: ACPI: bus type drm_connector registered
May 9 00:29:31.529526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:29:31.529538 systemd-journald[1126]: Journal started
May 9 00:29:31.529566 systemd-journald[1126]: Runtime Journal (/run/log/journal/607e93cd0bba4a869345a39bea427562) is 6.0M, max 48.3M, 42.2M free.
May 9 00:29:31.282113 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:29:31.308515 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 00:29:31.309063 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 00:29:31.541912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:29:31.545511 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:29:31.550458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:29:31.553021 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 00:29:31.553059 systemd[1]: Stopped verity-setup.service.
May 9 00:29:31.556671 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:31.560561 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:29:31.562800 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:29:31.564261 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:29:31.565667 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:29:31.566942 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:29:31.568284 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:29:31.569917 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:29:31.571443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:29:31.573140 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:29:31.573366 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:29:31.575848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:29:31.576060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:29:31.578184 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:29:31.578374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:29:31.579844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:29:31.580044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:29:31.581740 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:29:31.581918 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:29:31.583348 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:29:31.583532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:29:31.585217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:29:31.586898 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:29:31.588653 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:29:31.606269 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:29:31.611060 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:29:31.623595 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:29:31.626246 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:29:31.627436 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:29:31.627465 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:29:31.629503 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:29:31.631909 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:29:31.637585 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:29:31.638929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:29:31.641511 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:29:31.646184 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:29:31.647594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:29:31.648886 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:29:31.650099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:29:31.659391 systemd-journald[1126]: Time spent on flushing to /var/log/journal/607e93cd0bba4a869345a39bea427562 is 31.338ms for 979 entries.
May 9 00:29:31.659391 systemd-journald[1126]: System Journal (/var/log/journal/607e93cd0bba4a869345a39bea427562) is 8.0M, max 195.6M, 187.6M free.
May 9 00:29:31.740756 systemd-journald[1126]: Received client request to flush runtime journal.
May 9 00:29:31.740818 kernel: loop0: detected capacity change from 0 to 205544
May 9 00:29:31.740842 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:29:31.653581 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:29:31.658091 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:29:31.662262 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:29:31.666155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:29:31.667691 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:29:31.680975 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:29:31.687345 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:29:31.693281 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:29:31.697246 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:29:31.709727 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:29:31.712963 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:29:31.728373 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:29:31.729519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:29:31.737099 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
May 9 00:29:31.737113 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
May 9 00:29:31.742337 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:29:31.745854 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:29:31.757739 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:29:31.759816 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:29:31.760592 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:29:31.761530 kernel: loop1: detected capacity change from 0 to 140768
May 9 00:29:31.792135 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:29:31.805097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:29:31.810441 kernel: loop2: detected capacity change from 0 to 142488
May 9 00:29:31.834737 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 9 00:29:31.834758 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 9 00:29:31.840987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:29:31.877460 kernel: loop3: detected capacity change from 0 to 205544
May 9 00:29:31.886447 kernel: loop4: detected capacity change from 0 to 140768
May 9 00:29:31.901451 kernel: loop5: detected capacity change from 0 to 142488
May 9 00:29:31.911212 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 00:29:31.912038 (sd-merge)[1200]: Merged extensions into '/usr'.
May 9 00:29:31.916640 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:29:31.916660 systemd[1]: Reloading...
May 9 00:29:32.005834 zram_generator::config[1225]: No configuration found.
May 9 00:29:32.185758 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:29:32.195471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:29:32.248658 systemd[1]: Reloading finished in 331 ms.
May 9 00:29:32.281559 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:29:32.283101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:29:32.297648 systemd[1]: Starting ensure-sysext.service...
May 9 00:29:32.299779 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:29:32.308476 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
May 9 00:29:32.308493 systemd[1]: Reloading...
May 9 00:29:32.334311 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:29:32.334722 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:29:32.335950 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:29:32.336405 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 9 00:29:32.336537 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 9 00:29:32.343812 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:29:32.343831 systemd-tmpfiles[1264]: Skipping /boot
May 9 00:29:32.384455 zram_generator::config[1293]: No configuration found.
May 9 00:29:32.393565 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:29:32.393587 systemd-tmpfiles[1264]: Skipping /boot May 9 00:29:32.518465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:29:32.568547 systemd[1]: Reloading finished in 259 ms. May 9 00:29:32.588574 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:29:32.601091 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:29:32.610184 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:29:32.612742 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:29:32.615586 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:29:32.620661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:29:32.625995 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:29:32.633355 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:29:32.637209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:29:32.637389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:29:32.638673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:29:32.643902 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:29:32.647432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:29:32.649641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 9 00:29:32.654716 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:29:32.658515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:32.659985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:29:32.660226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:29:32.662118 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
May 9 00:29:32.662804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:29:32.663112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:29:32.665150 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:29:32.665518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:29:32.667608 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:29:32.681935 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:29:32.685818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:32.686064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:29:32.686631 augenrules[1359]: No rules
May 9 00:29:32.694720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:29:32.698513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:29:32.703889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:29:32.705047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:29:32.708810 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:29:32.709896 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:32.710750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:29:32.715476 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 00:29:32.717211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:29:32.717405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:29:32.719811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:29:32.719988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:29:32.721593 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:29:32.729568 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:29:32.729784 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:29:32.740179 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:29:32.758974 systemd[1]: Finished ensure-sysext.service.
May 9 00:29:32.766875 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 00:29:32.767685 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:32.767832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:29:32.813924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:29:32.817188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:29:32.822604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:29:32.825960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:29:32.827630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:29:32.830843 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:29:32.835618 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:29:32.836309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:29:32.836343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:29:32.837021 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:29:32.841841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:29:32.842032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:29:32.844007 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:29:32.844198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:29:32.848292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:29:32.852451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1394)
May 9 00:29:32.854022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:29:32.857175 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:29:32.857372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:29:32.868230 systemd-resolved[1334]: Positive Trust Anchors:
May 9 00:29:32.868246 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:29:32.868276 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:29:32.871552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:29:32.871619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:29:32.875825 systemd-resolved[1334]: Defaulting to hostname 'linux'.
May 9 00:29:32.878894 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:29:32.880594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:29:32.886815 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:29:33.068683 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:29:33.081475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 9 00:29:33.086664 kernel: ACPI: button: Power Button [PWRF]
May 9 00:29:33.090732 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:29:33.094758 systemd-networkd[1405]: lo: Link UP
May 9 00:29:33.094770 systemd-networkd[1405]: lo: Gained carrier
May 9 00:29:33.096572 systemd-networkd[1405]: Enumeration completed
May 9 00:29:33.096776 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:29:33.097478 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:29:33.097487 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:29:33.098118 systemd[1]: Reached target network.target - Network.
May 9 00:29:33.098364 systemd-networkd[1405]: eth0: Link UP
May 9 00:29:33.098374 systemd-networkd[1405]: eth0: Gained carrier
May 9 00:29:33.098386 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:29:33.107611 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 9 00:29:33.107921 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 9 00:29:33.108122 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 9 00:29:33.108316 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 9 00:29:33.108619 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:29:33.113492 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:29:33.119468 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 9 00:29:33.125097 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:29:33.126800 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:29:33.127540 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 00:29:33.127595 systemd-timesyncd[1406]: Initial clock synchronization to Fri 2025-05-09 00:29:33.519921 UTC.
May 9 00:29:33.171745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:29:33.222446 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:29:33.234927 kernel: kvm_amd: TSC scaling supported
May 9 00:29:33.235021 kernel: kvm_amd: Nested Virtualization enabled
May 9 00:29:33.235035 kernel: kvm_amd: Nested Paging enabled
May 9 00:29:33.236031 kernel: kvm_amd: LBR virtualization supported
May 9 00:29:33.236084 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 9 00:29:33.237461 kernel: kvm_amd: Virtual GIF supported
May 9 00:29:33.258448 kernel: EDAC MC: Ver: 3.0.0
May 9 00:29:33.271343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:29:33.283744 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:29:33.295617 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:29:33.308441 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:29:33.338372 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:29:33.339923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:29:33.341073 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:29:33.342265 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:29:33.343560 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:29:33.345063 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:29:33.346492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:29:33.347767 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:29:33.349030 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:29:33.349057 systemd[1]: Reached target paths.target - Path Units.
May 9 00:29:33.349985 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:29:33.351855 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:29:33.354632 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:29:33.369707 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:29:33.372304 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:29:33.374060 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:29:33.375379 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:29:33.376414 systemd[1]: Reached target basic.target - Basic System.
May 9 00:29:33.377577 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:29:33.377604 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:29:33.378827 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:29:33.381116 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:29:33.384464 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:29:33.384799 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:29:33.391620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:29:33.392885 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:29:33.394636 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:29:33.400075 jq[1444]: false
May 9 00:29:33.400194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:29:33.404616 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:29:33.415680 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:29:33.418591 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:29:33.419462 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:29:33.420436 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:29:33.423937 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:29:33.427327 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:29:33.430540 extend-filesystems[1445]: Found loop3
May 9 00:29:33.432543 extend-filesystems[1445]: Found loop4
May 9 00:29:33.432543 extend-filesystems[1445]: Found loop5
May 9 00:29:33.432543 extend-filesystems[1445]: Found sr0
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda1
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda2
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda3
May 9 00:29:33.432543 extend-filesystems[1445]: Found usr
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda4
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda6
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda7
May 9 00:29:33.432543 extend-filesystems[1445]: Found vda9
May 9 00:29:33.432543 extend-filesystems[1445]: Checking size of /dev/vda9
May 9 00:29:33.439961 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:29:33.446526 dbus-daemon[1443]: [system] SELinux support is enabled
May 9 00:29:33.440218 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:29:33.440864 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:29:33.457683 jq[1457]: true
May 9 00:29:33.441183 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:29:33.454212 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:29:33.460594 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:29:33.460864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:29:33.474535 extend-filesystems[1445]: Resized partition /dev/vda9
May 9 00:29:33.479596 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
May 9 00:29:33.485536 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:29:33.486262 jq[1466]: true
May 9 00:29:33.486660 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 00:29:33.485572 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:29:33.488379 update_engine[1456]: I20250509 00:29:33.486644 1456 main.cc:92] Flatcar Update Engine starting
May 9 00:29:33.489229 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:29:33.489358 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:29:33.499924 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:29:33.501516 update_engine[1456]: I20250509 00:29:33.501073 1456 update_check_scheduler.cc:74] Next update check in 8m59s
May 9 00:29:33.501684 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:29:33.511635 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1394)
May 9 00:29:33.509217 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:29:33.521547 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 00:29:33.551117 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 00:29:33.551117 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 00:29:33.551117 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 00:29:33.560538 extend-filesystems[1445]: Resized filesystem in /dev/vda9
May 9 00:29:33.555903 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:29:33.556168 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:29:33.591054 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 00:29:33.591090 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:29:33.594040 systemd-logind[1452]: New seat seat0.
May 9 00:29:33.595274 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:29:33.602578 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:29:33.605589 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:29:33.606898 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:29:33.608084 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 00:29:33.623819 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 00:29:33.700012 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 00:29:33.711912 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 00:29:33.772282 systemd[1]: issuegen.service: Deactivated successfully.
May 9 00:29:33.772664 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 00:29:33.779724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 00:29:33.800301 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 00:29:33.812071 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 00:29:33.815556 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 00:29:33.817275 systemd[1]: Reached target getty.target - Login Prompts.
May 9 00:29:34.013566 containerd[1473]: time="2025-05-09T00:29:34.013348512Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 9 00:29:34.041660 containerd[1473]: time="2025-05-09T00:29:34.041574803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.043757 containerd[1473]: time="2025-05-09T00:29:34.043709635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 00:29:34.043757 containerd[1473]: time="2025-05-09T00:29:34.043742627Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 00:29:34.043757 containerd[1473]: time="2025-05-09T00:29:34.043759033Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 00:29:34.044009 containerd[1473]: time="2025-05-09T00:29:34.043986003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 00:29:34.044051 containerd[1473]: time="2025-05-09T00:29:34.044011123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.044110 containerd[1473]: time="2025-05-09T00:29:34.044093272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:29:34.044131 containerd[1473]: time="2025-05-09T00:29:34.044110067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.044423 containerd[1473]: time="2025-05-09T00:29:34.044396042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:29:34.044423 containerd[1473]: time="2025-05-09T00:29:34.044419270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.044489 containerd[1473]: time="2025-05-09T00:29:34.044434131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:29:34.044489 containerd[1473]: time="2025-05-09T00:29:34.044444316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.044617 containerd[1473]: time="2025-05-09T00:29:34.044593499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.044897 containerd[1473]: time="2025-05-09T00:29:34.044874292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 00:29:34.045051 containerd[1473]: time="2025-05-09T00:29:34.045023076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:29:34.045051 containerd[1473]: time="2025-05-09T00:29:34.045043403Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 00:29:34.045235 containerd[1473]: time="2025-05-09T00:29:34.045212272Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 00:29:34.045303 containerd[1473]: time="2025-05-09T00:29:34.045283722Z" level=info msg="metadata content store policy set" policy=shared
May 9 00:29:34.052161 containerd[1473]: time="2025-05-09T00:29:34.052117310Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 00:29:34.052240 containerd[1473]: time="2025-05-09T00:29:34.052177796Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 00:29:34.052240 containerd[1473]: time="2025-05-09T00:29:34.052198586Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 00:29:34.052323 containerd[1473]: time="2025-05-09T00:29:34.052264023Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 00:29:34.052323 containerd[1473]: time="2025-05-09T00:29:34.052293525Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 00:29:34.052540 containerd[1473]: time="2025-05-09T00:29:34.052503669Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 00:29:34.052874 containerd[1473]: time="2025-05-09T00:29:34.052844949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 00:29:34.053024 containerd[1473]: time="2025-05-09T00:29:34.052997968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 00:29:34.053024 containerd[1473]: time="2025-05-09T00:29:34.053019556Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 00:29:34.053092 containerd[1473]: time="2025-05-09T00:29:34.053033587Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 00:29:34.053092 containerd[1473]: time="2025-05-09T00:29:34.053047892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053092 containerd[1473]: time="2025-05-09T00:29:34.053065391Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053092 containerd[1473]: time="2025-05-09T00:29:34.053082145Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053095797Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053111700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053124533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053137145Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053149085Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053169129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053182645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053194574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053207712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053220429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053234061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053246715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053259728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053271 containerd[1473]: time="2025-05-09T00:29:34.053272150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053301748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053318038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053331134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053346080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053361897Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053385693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053398148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053409016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053500287Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053519269Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053531439Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053543484Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 00:29:34.053663 containerd[1473]: time="2025-05-09T00:29:34.053552849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 00:29:34.054009 containerd[1473]: time="2025-05-09T00:29:34.053586229Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 00:29:34.054009 containerd[1473]: time="2025-05-09T00:29:34.053604139Z" level=info msg="NRI interface is disabled by configuration."
May 9 00:29:34.054009 containerd[1473]: time="2025-05-09T00:29:34.053618538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 9 00:29:34.054103 containerd[1473]: time="2025-05-09T00:29:34.053938388Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 9 00:29:34.054103 containerd[1473]: time="2025-05-09T00:29:34.053993210Z" level=info msg="Connect containerd service"
May 9 00:29:34.054103 containerd[1473]: time="2025-05-09T00:29:34.054037175Z" level=info msg="using legacy CRI server"
May 9 00:29:34.054103 containerd[1473]: time="2025-05-09T00:29:34.054048936Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 00:29:34.054423 containerd[1473]: time="2025-05-09T00:29:34.054226349Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 9 00:29:34.055083 containerd[1473]: time="2025-05-09T00:29:34.055046226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:29:34.055314 containerd[1473]: time="2025-05-09T00:29:34.055261677Z" level=info msg="Start subscribing containerd event"
May 9 00:29:34.055363 containerd[1473]: time="2025-05-09T00:29:34.055332012Z" level=info msg="Start recovering state"
May 9 00:29:34.055772 containerd[1473]: time="2025-05-09T00:29:34.055712244Z" level=info msg="Start event monitor"
May 9 00:29:34.055890 containerd[1473]: time="2025-05-09T00:29:34.055855961Z" level=info msg="Start snapshots syncer"
May 9 00:29:34.056461 containerd[1473]: time="2025-05-09T00:29:34.055952709Z" level=info msg="Start cni network conf syncer for default"
May 9 00:29:34.056461 containerd[1473]: time="2025-05-09T00:29:34.055990535Z" level=info msg="Start streaming server"
May 9 00:29:34.056461 containerd[1473]: time="2025-05-09T00:29:34.056024094Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 00:29:34.056461 containerd[1473]: time="2025-05-09T00:29:34.056099947Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 00:29:34.056461 containerd[1473]: time="2025-05-09T00:29:34.056188917Z" level=info msg="containerd successfully booted in 0.045167s"
May 9 00:29:34.056609 systemd[1]: Started containerd.service - containerd container runtime.
May 9 00:29:35.101379 systemd-networkd[1405]: eth0: Gained IPv6LL
May 9 00:29:35.107412 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 00:29:35.109480 systemd[1]: Reached target network-online.target - Network is Online.
May 9 00:29:35.119700 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:29:35.122512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:35.124930 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:29:35.149695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:29:35.151474 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:29:35.151722 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:29:35.154254 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:29:36.389546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:36.391704 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:29:36.395566 systemd[1]: Startup finished in 1.168s (kernel) + 4.988s (initrd) + 5.737s (userspace) = 11.895s. May 9 00:29:36.397883 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:29:36.983017 kubelet[1548]: E0509 00:29:36.982933 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:29:36.987397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:29:36.987634 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:29:36.988016 systemd[1]: kubelet.service: Consumed 1.667s CPU time. May 9 00:29:38.392129 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 9 00:29:38.393570 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652). May 9 00:29:38.435693 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:38.437925 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:38.447785 systemd-logind[1452]: New session 1 of user core. May 9 00:29:38.449515 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:29:38.466897 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:29:38.482277 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:29:38.495990 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:29:38.499608 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:29:38.616238 systemd[1565]: Queued start job for default target default.target. May 9 00:29:38.627932 systemd[1565]: Created slice app.slice - User Application Slice. May 9 00:29:38.627965 systemd[1565]: Reached target paths.target - Paths. May 9 00:29:38.627984 systemd[1565]: Reached target timers.target - Timers. May 9 00:29:38.629684 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:29:38.641982 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:29:38.642117 systemd[1565]: Reached target sockets.target - Sockets. May 9 00:29:38.642136 systemd[1565]: Reached target basic.target - Basic System. May 9 00:29:38.642185 systemd[1565]: Reached target default.target - Main User Target. May 9 00:29:38.642227 systemd[1565]: Startup finished in 135ms. May 9 00:29:38.642669 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:29:38.644356 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 9 00:29:38.710777 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:45664.service - OpenSSH per-connection server daemon (10.0.0.1:45664). May 9 00:29:38.742783 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:38.745004 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:38.749970 systemd-logind[1452]: New session 2 of user core. May 9 00:29:38.764624 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:29:38.823043 sshd[1576]: pam_unix(sshd:session): session closed for user core May 9 00:29:38.832477 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:45664.service: Deactivated successfully. May 9 00:29:38.834566 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:29:38.836537 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. May 9 00:29:38.847894 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:45678.service - OpenSSH per-connection server daemon (10.0.0.1:45678). May 9 00:29:38.849108 systemd-logind[1452]: Removed session 2. May 9 00:29:38.875694 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 45678 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:38.877408 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:38.881742 systemd-logind[1452]: New session 3 of user core. May 9 00:29:38.892692 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:29:38.947731 sshd[1583]: pam_unix(sshd:session): session closed for user core May 9 00:29:38.966315 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:45678.service: Deactivated successfully. May 9 00:29:38.968538 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:29:38.970261 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. 
May 9 00:29:38.979766 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:45680.service - OpenSSH per-connection server daemon (10.0.0.1:45680). May 9 00:29:38.980873 systemd-logind[1452]: Removed session 3. May 9 00:29:39.006135 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 45680 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:39.007772 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:39.012072 systemd-logind[1452]: New session 4 of user core. May 9 00:29:39.028624 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:29:39.084583 sshd[1590]: pam_unix(sshd:session): session closed for user core May 9 00:29:39.102078 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:45680.service: Deactivated successfully. May 9 00:29:39.104289 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:29:39.106042 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. May 9 00:29:39.115858 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:45682.service - OpenSSH per-connection server daemon (10.0.0.1:45682). May 9 00:29:39.117282 systemd-logind[1452]: Removed session 4. May 9 00:29:39.144993 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 45682 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:39.146791 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:39.150916 systemd-logind[1452]: New session 5 of user core. May 9 00:29:39.165629 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 9 00:29:39.227986 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:29:39.228375 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:29:39.244437 sudo[1600]: pam_unix(sudo:session): session closed for user root May 9 00:29:39.246506 sshd[1597]: pam_unix(sshd:session): session closed for user core May 9 00:29:39.257345 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:45682.service: Deactivated successfully. May 9 00:29:39.259295 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:29:39.260926 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. May 9 00:29:39.270753 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:45686.service - OpenSSH per-connection server daemon (10.0.0.1:45686). May 9 00:29:39.271996 systemd-logind[1452]: Removed session 5. May 9 00:29:39.298769 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 45686 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:39.300368 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:39.304803 systemd-logind[1452]: New session 6 of user core. May 9 00:29:39.315580 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:29:39.373571 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:29:39.374019 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:29:39.378643 sudo[1609]: pam_unix(sudo:session): session closed for user root May 9 00:29:39.385942 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:29:39.386294 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:29:39.405691 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
May 9 00:29:39.407570 auditctl[1612]: No rules May 9 00:29:39.408901 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:29:39.409271 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:29:39.411491 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:29:39.447319 augenrules[1630]: No rules May 9 00:29:39.449134 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:29:39.450381 sudo[1608]: pam_unix(sudo:session): session closed for user root May 9 00:29:39.452216 sshd[1605]: pam_unix(sshd:session): session closed for user core May 9 00:29:39.470595 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:45686.service: Deactivated successfully. May 9 00:29:39.472353 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:29:39.473850 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. May 9 00:29:39.475346 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:45690.service - OpenSSH per-connection server daemon (10.0.0.1:45690). May 9 00:29:39.476292 systemd-logind[1452]: Removed session 6. May 9 00:29:39.520846 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 45690 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:39.522514 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:39.526638 systemd-logind[1452]: New session 7 of user core. May 9 00:29:39.540655 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:29:39.596193 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:29:39.596581 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:29:39.622923 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:29:39.643232 systemd[1]: coreos-metadata.service: Deactivated successfully. 
May 9 00:29:39.643577 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:29:40.110162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:40.110424 systemd[1]: kubelet.service: Consumed 1.667s CPU time. May 9 00:29:40.126828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:40.158529 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit session-7.scope)... May 9 00:29:40.158550 systemd[1]: Reloading... May 9 00:29:40.241524 zram_generator::config[1723]: No configuration found. May 9 00:29:40.452285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:29:40.531372 systemd[1]: Reloading finished in 372 ms. May 9 00:29:40.579252 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:29:40.579351 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:29:40.579646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:40.582505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:40.736571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:40.742305 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:29:40.786572 kubelet[1769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:29:40.786572 kubelet[1769]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 9 00:29:40.786572 kubelet[1769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:29:40.787562 kubelet[1769]: I0509 00:29:40.787510 1769 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:29:41.105613 kubelet[1769]: I0509 00:29:41.104497 1769 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:29:41.105613 kubelet[1769]: I0509 00:29:41.104560 1769 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:29:41.247587 kubelet[1769]: I0509 00:29:41.247509 1769 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:29:41.272893 kubelet[1769]: I0509 00:29:41.272832 1769 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:29:41.281463 kubelet[1769]: E0509 00:29:41.281381 1769 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:29:41.281463 kubelet[1769]: I0509 00:29:41.281461 1769 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:29:41.290222 kubelet[1769]: I0509 00:29:41.290177 1769 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:29:41.291756 kubelet[1769]: I0509 00:29:41.291715 1769 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:29:41.291976 kubelet[1769]: I0509 00:29:41.291923 1769 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:29:41.293454 kubelet[1769]: I0509 00:29:41.291961 1769 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} May 9 00:29:41.293454 kubelet[1769]: I0509 00:29:41.293010 1769 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:29:41.293454 kubelet[1769]: I0509 00:29:41.293030 1769 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:29:41.293454 kubelet[1769]: I0509 00:29:41.293211 1769 state_mem.go:36] "Initialized new in-memory state store" May 9 00:29:41.294909 kubelet[1769]: I0509 00:29:41.294874 1769 kubelet.go:408] "Attempting to sync node with API server" May 9 00:29:41.294909 kubelet[1769]: I0509 00:29:41.294907 1769 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:29:41.295018 kubelet[1769]: I0509 00:29:41.294971 1769 kubelet.go:314] "Adding apiserver pod source" May 9 00:29:41.295018 kubelet[1769]: I0509 00:29:41.295003 1769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:29:41.295126 kubelet[1769]: E0509 00:29:41.295091 1769 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:41.295151 kubelet[1769]: E0509 00:29:41.295138 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:41.300615 kubelet[1769]: I0509 00:29:41.300575 1769 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:29:41.302203 kubelet[1769]: I0509 00:29:41.302179 1769 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:29:41.303372 kubelet[1769]: W0509 00:29:41.303326 1769 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 9 00:29:41.304375 kubelet[1769]: I0509 00:29:41.304217 1769 server.go:1269] "Started kubelet" May 9 00:29:41.305503 kubelet[1769]: I0509 00:29:41.304842 1769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:29:41.305503 kubelet[1769]: I0509 00:29:41.305290 1769 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:29:41.305503 kubelet[1769]: I0509 00:29:41.305375 1769 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:29:41.307066 kubelet[1769]: I0509 00:29:41.307033 1769 server.go:460] "Adding debug handlers to kubelet server" May 9 00:29:41.308675 kubelet[1769]: I0509 00:29:41.308502 1769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:29:41.308726 kubelet[1769]: I0509 00:29:41.308706 1769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:29:41.312461 kubelet[1769]: I0509 00:29:41.311022 1769 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:29:41.312461 kubelet[1769]: I0509 00:29:41.311173 1769 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:29:41.312461 kubelet[1769]: I0509 00:29:41.311302 1769 reconciler.go:26] "Reconciler: start to sync state" May 9 00:29:41.312461 kubelet[1769]: E0509 00:29:41.311648 1769 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" May 9 00:29:41.312461 kubelet[1769]: I0509 00:29:41.311891 1769 factory.go:221] Registration of the systemd container factory successfully May 9 00:29:41.312461 kubelet[1769]: I0509 00:29:41.311991 1769 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 
00:29:41.312793 kubelet[1769]: E0509 00:29:41.312744 1769 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:29:41.313554 kubelet[1769]: I0509 00:29:41.313528 1769 factory.go:221] Registration of the containerd container factory successfully May 9 00:29:41.325532 kubelet[1769]: E0509 00:29:41.325490 1769 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.53\" not found" node="10.0.0.53" May 9 00:29:41.327254 kubelet[1769]: I0509 00:29:41.327232 1769 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:29:41.327297 kubelet[1769]: I0509 00:29:41.327258 1769 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:29:41.327297 kubelet[1769]: I0509 00:29:41.327279 1769 state_mem.go:36] "Initialized new in-memory state store" May 9 00:29:41.412128 kubelet[1769]: E0509 00:29:41.411955 1769 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" May 9 00:29:41.512551 kubelet[1769]: E0509 00:29:41.512483 1769 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" May 9 00:29:41.613678 kubelet[1769]: E0509 00:29:41.613628 1769 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" May 9 00:29:41.714343 kubelet[1769]: E0509 00:29:41.714193 1769 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" May 9 00:29:41.774823 kubelet[1769]: I0509 00:29:41.774780 1769 policy_none.go:49] "None policy: Start" May 9 00:29:41.775819 kubelet[1769]: I0509 00:29:41.775778 1769 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:29:41.775819 kubelet[1769]: I0509 00:29:41.775813 1769 state_mem.go:35] "Initializing new in-memory state store" May 9 00:29:41.782478 kubelet[1769]: E0509 
00:29:41.782448 1769 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.53" not found May 9 00:29:41.785735 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:29:41.797253 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:29:41.801325 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:29:41.802045 kubelet[1769]: I0509 00:29:41.801481 1769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:29:41.803087 kubelet[1769]: I0509 00:29:41.803045 1769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:29:41.803327 kubelet[1769]: I0509 00:29:41.803295 1769 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:29:41.803448 kubelet[1769]: I0509 00:29:41.803341 1769 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:29:41.804128 kubelet[1769]: E0509 00:29:41.803934 1769 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:29:41.807876 kubelet[1769]: I0509 00:29:41.807555 1769 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:29:41.807876 kubelet[1769]: I0509 00:29:41.807838 1769 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:29:41.807956 kubelet[1769]: I0509 00:29:41.807858 1769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:29:41.808219 kubelet[1769]: I0509 00:29:41.808200 1769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:29:41.809532 kubelet[1769]: E0509 00:29:41.809493 1769 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.53\" not found" May 9 00:29:41.909087 kubelet[1769]: I0509 00:29:41.909031 1769 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.53" May 9 00:29:41.913801 kubelet[1769]: I0509 00:29:41.913747 1769 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.53" May 9 00:29:41.922879 kubelet[1769]: I0509 00:29:41.922844 1769 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 9 00:29:41.923368 containerd[1473]: time="2025-05-09T00:29:41.923285243Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:29:41.923768 kubelet[1769]: I0509 00:29:41.923567 1769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 9 00:29:42.250381 kubelet[1769]: I0509 00:29:42.250322 1769 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 9 00:29:42.250664 kubelet[1769]: W0509 00:29:42.250523 1769 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 00:29:42.250664 kubelet[1769]: W0509 00:29:42.250550 1769 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 00:29:42.250664 kubelet[1769]: W0509 00:29:42.250560 1769 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 00:29:42.262156 sudo[1641]: 
pam_unix(sudo:session): session closed for user root May 9 00:29:42.264122 sshd[1638]: pam_unix(sshd:session): session closed for user core May 9 00:29:42.268604 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:45690.service: Deactivated successfully. May 9 00:29:42.270894 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:29:42.271524 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. May 9 00:29:42.272591 systemd-logind[1452]: Removed session 7. May 9 00:29:42.296043 kubelet[1769]: I0509 00:29:42.295980 1769 apiserver.go:52] "Watching apiserver" May 9 00:29:42.296158 kubelet[1769]: E0509 00:29:42.295988 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:42.300186 kubelet[1769]: E0509 00:29:42.300017 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:42.307571 systemd[1]: Created slice kubepods-besteffort-poda70aa38f_7d8a_4119_a2fe_9190f94492aa.slice - libcontainer container kubepods-besteffort-poda70aa38f_7d8a_4119_a2fe_9190f94492aa.slice. 
May 9 00:29:42.312203 kubelet[1769]: I0509 00:29:42.312161 1769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:29:42.317061 kubelet[1769]: I0509 00:29:42.317035 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-cni-bin-dir\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317126 kubelet[1769]: I0509 00:29:42.317065 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/026f136c-abea-42e4-91de-cf798cfb70e0-varrun\") pod \"csi-node-driver-nfq8m\" (UID: \"026f136c-abea-42e4-91de-cf798cfb70e0\") " pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:42.317126 kubelet[1769]: I0509 00:29:42.317083 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a70aa38f-7d8a-4119-a2fe-9190f94492aa-xtables-lock\") pod \"kube-proxy-67jp6\" (UID: \"a70aa38f-7d8a-4119-a2fe-9190f94492aa\") " pod="kube-system/kube-proxy-67jp6" May 9 00:29:42.317126 kubelet[1769]: I0509 00:29:42.317096 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a70aa38f-7d8a-4119-a2fe-9190f94492aa-lib-modules\") pod \"kube-proxy-67jp6\" (UID: \"a70aa38f-7d8a-4119-a2fe-9190f94492aa\") " pod="kube-system/kube-proxy-67jp6" May 9 00:29:42.317126 kubelet[1769]: I0509 00:29:42.317114 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-xtables-lock\") pod \"calico-node-cffx7\" (UID: 
\"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317241 kubelet[1769]: I0509 00:29:42.317133 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2c446894-0492-42ef-b854-f578dc18f66f-node-certs\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317241 kubelet[1769]: I0509 00:29:42.317156 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-var-run-calico\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317241 kubelet[1769]: I0509 00:29:42.317181 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-flexvol-driver-host\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317241 kubelet[1769]: I0509 00:29:42.317212 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/026f136c-abea-42e4-91de-cf798cfb70e0-socket-dir\") pod \"csi-node-driver-nfq8m\" (UID: \"026f136c-abea-42e4-91de-cf798cfb70e0\") " pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:42.317241 kubelet[1769]: I0509 00:29:42.317235 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vb8\" (UniqueName: \"kubernetes.io/projected/026f136c-abea-42e4-91de-cf798cfb70e0-kube-api-access-k4vb8\") pod \"csi-node-driver-nfq8m\" (UID: \"026f136c-abea-42e4-91de-cf798cfb70e0\") " 
pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:42.317359 kubelet[1769]: I0509 00:29:42.317252 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhdb\" (UniqueName: \"kubernetes.io/projected/a70aa38f-7d8a-4119-a2fe-9190f94492aa-kube-api-access-djhdb\") pod \"kube-proxy-67jp6\" (UID: \"a70aa38f-7d8a-4119-a2fe-9190f94492aa\") " pod="kube-system/kube-proxy-67jp6" May 9 00:29:42.317359 kubelet[1769]: I0509 00:29:42.317270 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-lib-modules\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317359 kubelet[1769]: I0509 00:29:42.317291 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-policysync\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317359 kubelet[1769]: I0509 00:29:42.317312 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c446894-0492-42ef-b854-f578dc18f66f-tigera-ca-bundle\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317359 kubelet[1769]: I0509 00:29:42.317333 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/026f136c-abea-42e4-91de-cf798cfb70e0-kubelet-dir\") pod \"csi-node-driver-nfq8m\" (UID: \"026f136c-abea-42e4-91de-cf798cfb70e0\") " pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:42.317509 
kubelet[1769]: I0509 00:29:42.317349 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a70aa38f-7d8a-4119-a2fe-9190f94492aa-kube-proxy\") pod \"kube-proxy-67jp6\" (UID: \"a70aa38f-7d8a-4119-a2fe-9190f94492aa\") " pod="kube-system/kube-proxy-67jp6" May 9 00:29:42.317509 kubelet[1769]: I0509 00:29:42.317363 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9f8\" (UniqueName: \"kubernetes.io/projected/2c446894-0492-42ef-b854-f578dc18f66f-kube-api-access-6g9f8\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317509 kubelet[1769]: I0509 00:29:42.317389 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/026f136c-abea-42e4-91de-cf798cfb70e0-registration-dir\") pod \"csi-node-driver-nfq8m\" (UID: \"026f136c-abea-42e4-91de-cf798cfb70e0\") " pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:42.317509 kubelet[1769]: I0509 00:29:42.317460 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-var-lib-calico\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317509 kubelet[1769]: I0509 00:29:42.317484 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-cni-net-dir\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.317629 kubelet[1769]: I0509 00:29:42.317504 1769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2c446894-0492-42ef-b854-f578dc18f66f-cni-log-dir\") pod \"calico-node-cffx7\" (UID: \"2c446894-0492-42ef-b854-f578dc18f66f\") " pod="calico-system/calico-node-cffx7" May 9 00:29:42.321561 systemd[1]: Created slice kubepods-besteffort-pod2c446894_0492_42ef_b854_f578dc18f66f.slice - libcontainer container kubepods-besteffort-pod2c446894_0492_42ef_b854_f578dc18f66f.slice. May 9 00:29:42.420465 kubelet[1769]: E0509 00:29:42.420403 1769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:42.420465 kubelet[1769]: W0509 00:29:42.420455 1769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:42.420627 kubelet[1769]: E0509 00:29:42.420489 1769 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:42.422845 kubelet[1769]: E0509 00:29:42.422824 1769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:42.422845 kubelet[1769]: W0509 00:29:42.422839 1769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:42.422936 kubelet[1769]: E0509 00:29:42.422851 1769 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:42.430949 kubelet[1769]: E0509 00:29:42.430925 1769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:42.431466 kubelet[1769]: W0509 00:29:42.431048 1769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:42.431466 kubelet[1769]: E0509 00:29:42.431074 1769 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:42.431820 kubelet[1769]: E0509 00:29:42.431801 1769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:42.431820 kubelet[1769]: W0509 00:29:42.431819 1769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:42.431916 kubelet[1769]: E0509 00:29:42.431841 1769 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:42.432109 kubelet[1769]: E0509 00:29:42.432087 1769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:42.432109 kubelet[1769]: W0509 00:29:42.432105 1769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:42.432187 kubelet[1769]: E0509 00:29:42.432118 1769 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:42.620411 kubelet[1769]: E0509 00:29:42.620292 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:42.621161 containerd[1473]: time="2025-05-09T00:29:42.621108765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67jp6,Uid:a70aa38f-7d8a-4119-a2fe-9190f94492aa,Namespace:kube-system,Attempt:0,}" May 9 00:29:42.624349 kubelet[1769]: E0509 00:29:42.624322 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:42.624706 containerd[1473]: time="2025-05-09T00:29:42.624680077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cffx7,Uid:2c446894-0492-42ef-b854-f578dc18f66f,Namespace:calico-system,Attempt:0,}" May 9 00:29:43.297269 kubelet[1769]: E0509 00:29:43.297216 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:43.407354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685848198.mount: Deactivated successfully. 
May 9 00:29:43.415655 containerd[1473]: time="2025-05-09T00:29:43.415591832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:43.416957 containerd[1473]: time="2025-05-09T00:29:43.416894912Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:43.417727 containerd[1473]: time="2025-05-09T00:29:43.417672054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:29:43.418977 containerd[1473]: time="2025-05-09T00:29:43.418902249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:29:43.419988 containerd[1473]: time="2025-05-09T00:29:43.419951623Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:43.424201 containerd[1473]: time="2025-05-09T00:29:43.424159687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:43.425444 containerd[1473]: time="2025-05-09T00:29:43.425385328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 800.621869ms" May 9 00:29:43.426340 containerd[1473]: 
time="2025-05-09T00:29:43.426289644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 804.90162ms" May 9 00:29:43.541459 containerd[1473]: time="2025-05-09T00:29:43.541312679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:43.542110 containerd[1473]: time="2025-05-09T00:29:43.542013868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:43.542110 containerd[1473]: time="2025-05-09T00:29:43.542119498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:43.542110 containerd[1473]: time="2025-05-09T00:29:43.542143146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:43.542110 containerd[1473]: time="2025-05-09T00:29:43.542261423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:43.542968 containerd[1473]: time="2025-05-09T00:29:43.542648994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:43.542968 containerd[1473]: time="2025-05-09T00:29:43.542669510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:43.542968 containerd[1473]: time="2025-05-09T00:29:43.542825973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:43.619754 systemd[1]: Started cri-containerd-65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471.scope - libcontainer container 65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471. May 9 00:29:43.622033 systemd[1]: Started cri-containerd-a7ba6ec69e869b5cc8baeb2dbaa921ce92cdadf4d75ee704bdb21694f9b4f619.scope - libcontainer container a7ba6ec69e869b5cc8baeb2dbaa921ce92cdadf4d75ee704bdb21694f9b4f619. May 9 00:29:43.651304 containerd[1473]: time="2025-05-09T00:29:43.651245110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cffx7,Uid:2c446894-0492-42ef-b854-f578dc18f66f,Namespace:calico-system,Attempt:0,} returns sandbox id \"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\"" May 9 00:29:43.653872 kubelet[1769]: E0509 00:29:43.653826 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:43.655588 containerd[1473]: time="2025-05-09T00:29:43.655538513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67jp6,Uid:a70aa38f-7d8a-4119-a2fe-9190f94492aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7ba6ec69e869b5cc8baeb2dbaa921ce92cdadf4d75ee704bdb21694f9b4f619\"" May 9 00:29:43.655820 containerd[1473]: time="2025-05-09T00:29:43.655765940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 00:29:43.656599 kubelet[1769]: E0509 00:29:43.656572 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:43.805758 kubelet[1769]: E0509 00:29:43.805650 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:44.298595 kubelet[1769]: E0509 00:29:44.298504 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:45.298702 kubelet[1769]: E0509 00:29:45.298654 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:45.808906 kubelet[1769]: E0509 00:29:45.808547 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:45.832077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816294068.mount: Deactivated successfully. 
May 9 00:29:46.168775 containerd[1473]: time="2025-05-09T00:29:46.168556930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:46.169370 containerd[1473]: time="2025-05-09T00:29:46.169338995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6859697" May 9 00:29:46.170639 containerd[1473]: time="2025-05-09T00:29:46.170573891Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:46.172543 containerd[1473]: time="2025-05-09T00:29:46.172492417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:46.173171 containerd[1473]: time="2025-05-09T00:29:46.173130414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.517311907s" May 9 00:29:46.173171 containerd[1473]: time="2025-05-09T00:29:46.173168720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 9 00:29:46.174504 containerd[1473]: time="2025-05-09T00:29:46.174470465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 9 00:29:46.175759 containerd[1473]: time="2025-05-09T00:29:46.175713011Z" level=info msg="CreateContainer within sandbox 
\"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 00:29:46.193139 containerd[1473]: time="2025-05-09T00:29:46.193087717Z" level=info msg="CreateContainer within sandbox \"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b\"" May 9 00:29:46.194314 containerd[1473]: time="2025-05-09T00:29:46.194258103Z" level=info msg="StartContainer for \"41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b\"" May 9 00:29:46.298887 kubelet[1769]: E0509 00:29:46.298823 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:46.337708 systemd[1]: Started cri-containerd-41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b.scope - libcontainer container 41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b. May 9 00:29:46.429044 containerd[1473]: time="2025-05-09T00:29:46.428898682Z" level=info msg="StartContainer for \"41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b\" returns successfully" May 9 00:29:46.454844 systemd[1]: cri-containerd-41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b.scope: Deactivated successfully. 
May 9 00:29:46.551821 containerd[1473]: time="2025-05-09T00:29:46.551699851Z" level=info msg="shim disconnected" id=41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b namespace=k8s.io May 9 00:29:46.551821 containerd[1473]: time="2025-05-09T00:29:46.551801878Z" level=warning msg="cleaning up after shim disconnected" id=41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b namespace=k8s.io May 9 00:29:46.551821 containerd[1473]: time="2025-05-09T00:29:46.551814131Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:29:46.772485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41cd0d6650711f3771e62f212b0b0e0f59a777d389ca812b0747f8e1a39c9a6b-rootfs.mount: Deactivated successfully. May 9 00:29:47.026981 kubelet[1769]: E0509 00:29:47.026843 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:47.300045 kubelet[1769]: E0509 00:29:47.299892 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:47.505135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310660181.mount: Deactivated successfully. 
May 9 00:29:47.804983 kubelet[1769]: E0509 00:29:47.804905 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:48.274859 containerd[1473]: time="2025-05-09T00:29:48.274685572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:48.276218 containerd[1473]: time="2025-05-09T00:29:48.276170546Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 9 00:29:48.277567 containerd[1473]: time="2025-05-09T00:29:48.277514671Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:48.279499 containerd[1473]: time="2025-05-09T00:29:48.279460796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:48.280106 containerd[1473]: time="2025-05-09T00:29:48.280060758Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.105550113s" May 9 00:29:48.280153 containerd[1473]: time="2025-05-09T00:29:48.280108536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" 
May 9 00:29:48.281177 containerd[1473]: time="2025-05-09T00:29:48.281107820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 00:29:48.282310 containerd[1473]: time="2025-05-09T00:29:48.282280528Z" level=info msg="CreateContainer within sandbox \"a7ba6ec69e869b5cc8baeb2dbaa921ce92cdadf4d75ee704bdb21694f9b4f619\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:29:48.297054 containerd[1473]: time="2025-05-09T00:29:48.297009482Z" level=info msg="CreateContainer within sandbox \"a7ba6ec69e869b5cc8baeb2dbaa921ce92cdadf4d75ee704bdb21694f9b4f619\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd4ddec91cc2e42fae22443447ef6a5d395892af4216b2cb324103e09db4519f\"" May 9 00:29:48.297454 containerd[1473]: time="2025-05-09T00:29:48.297389088Z" level=info msg="StartContainer for \"dd4ddec91cc2e42fae22443447ef6a5d395892af4216b2cb324103e09db4519f\"" May 9 00:29:48.300743 kubelet[1769]: E0509 00:29:48.300645 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:48.395628 systemd[1]: Started cri-containerd-dd4ddec91cc2e42fae22443447ef6a5d395892af4216b2cb324103e09db4519f.scope - libcontainer container dd4ddec91cc2e42fae22443447ef6a5d395892af4216b2cb324103e09db4519f. 
May 9 00:29:48.436744 containerd[1473]: time="2025-05-09T00:29:48.436697445Z" level=info msg="StartContainer for \"dd4ddec91cc2e42fae22443447ef6a5d395892af4216b2cb324103e09db4519f\" returns successfully" May 9 00:29:49.031636 kubelet[1769]: E0509 00:29:49.031579 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:49.067064 kubelet[1769]: I0509 00:29:49.066793 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-67jp6" podStartSLOduration=2.443056097 podStartE2EDuration="7.066754635s" podCreationTimestamp="2025-05-09 00:29:42 +0000 UTC" firstStartedPulling="2025-05-09 00:29:43.657233534 +0000 UTC m=+2.910491177" lastFinishedPulling="2025-05-09 00:29:48.280932082 +0000 UTC m=+7.534189715" observedRunningTime="2025-05-09 00:29:49.065726921 +0000 UTC m=+8.318984585" watchObservedRunningTime="2025-05-09 00:29:49.066754635 +0000 UTC m=+8.320012278" May 9 00:29:49.303842 kubelet[1769]: E0509 00:29:49.303653 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:49.806888 kubelet[1769]: E0509 00:29:49.805086 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:50.033289 kubelet[1769]: E0509 00:29:50.033246 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:50.304296 kubelet[1769]: E0509 00:29:50.304154 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 9 00:29:51.304386 kubelet[1769]: E0509 00:29:51.304317 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:51.804521 kubelet[1769]: E0509 00:29:51.804264 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:52.304700 kubelet[1769]: E0509 00:29:52.304673 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:52.762720 containerd[1473]: time="2025-05-09T00:29:52.762397622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:52.763913 containerd[1473]: time="2025-05-09T00:29:52.763853610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 9 00:29:52.765049 containerd[1473]: time="2025-05-09T00:29:52.765001522Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:52.767513 containerd[1473]: time="2025-05-09T00:29:52.767473341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:52.768411 containerd[1473]: time="2025-05-09T00:29:52.768362557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", 
repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.487224852s" May 9 00:29:52.768411 containerd[1473]: time="2025-05-09T00:29:52.768401945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 9 00:29:52.770405 containerd[1473]: time="2025-05-09T00:29:52.770372646Z" level=info msg="CreateContainer within sandbox \"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:29:52.786661 containerd[1473]: time="2025-05-09T00:29:52.786603779Z" level=info msg="CreateContainer within sandbox \"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7\"" May 9 00:29:52.787068 containerd[1473]: time="2025-05-09T00:29:52.787038631Z" level=info msg="StartContainer for \"83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7\"" May 9 00:29:52.829625 systemd[1]: Started cri-containerd-83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7.scope - libcontainer container 83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7. 
May 9 00:29:52.878735 containerd[1473]: time="2025-05-09T00:29:52.878687820Z" level=info msg="StartContainer for \"83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7\" returns successfully" May 9 00:29:53.043446 kubelet[1769]: E0509 00:29:53.043302 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:53.305902 kubelet[1769]: E0509 00:29:53.305740 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:53.807452 kubelet[1769]: E0509 00:29:53.806842 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:54.045294 kubelet[1769]: E0509 00:29:54.044756 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:54.307171 kubelet[1769]: E0509 00:29:54.306936 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:55.307957 kubelet[1769]: E0509 00:29:55.307875 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:55.752602 systemd[1]: cri-containerd-83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7.scope: Deactivated successfully. May 9 00:29:55.753026 systemd[1]: cri-containerd-83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7.scope: Consumed 2.488s CPU time. 
May 9 00:29:55.789530 kubelet[1769]: I0509 00:29:55.789481 1769 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 00:29:55.812852 systemd[1]: Created slice kubepods-besteffort-pod026f136c_abea_42e4_91de_cf798cfb70e0.slice - libcontainer container kubepods-besteffort-pod026f136c_abea_42e4_91de_cf798cfb70e0.slice. May 9 00:29:55.816401 containerd[1473]: time="2025-05-09T00:29:55.816350753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfq8m,Uid:026f136c-abea-42e4-91de-cf798cfb70e0,Namespace:calico-system,Attempt:0,}" May 9 00:29:55.871061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7-rootfs.mount: Deactivated successfully. May 9 00:29:55.902319 containerd[1473]: time="2025-05-09T00:29:55.902255767Z" level=error msg="Failed to destroy network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:55.902823 containerd[1473]: time="2025-05-09T00:29:55.902789105Z" level=error msg="encountered an error cleaning up failed sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:55.902867 containerd[1473]: time="2025-05-09T00:29:55.902843196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfq8m,Uid:026f136c-abea-42e4-91de-cf798cfb70e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:55.903465 kubelet[1769]: E0509 00:29:55.903111 1769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:55.903465 kubelet[1769]: E0509 00:29:55.903201 1769 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:55.903465 kubelet[1769]: E0509 00:29:55.903225 1769 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nfq8m" May 9 00:29:55.903674 kubelet[1769]: E0509 00:29:55.903270 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nfq8m_calico-system(026f136c-abea-42e4-91de-cf798cfb70e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nfq8m_calico-system(026f136c-abea-42e4-91de-cf798cfb70e0)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:55.904611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731-shm.mount: Deactivated successfully. May 9 00:29:56.060047 kubelet[1769]: I0509 00:29:56.059852 1769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:29:56.061245 containerd[1473]: time="2025-05-09T00:29:56.061150327Z" level=info msg="StopPodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\"" May 9 00:29:56.061445 containerd[1473]: time="2025-05-09T00:29:56.061405224Z" level=info msg="Ensure that sandbox 5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731 in task-service has been cleanup successfully" May 9 00:29:56.128829 containerd[1473]: time="2025-05-09T00:29:56.128713534Z" level=error msg="StopPodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" failed" error="failed to destroy network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:56.129262 kubelet[1769]: E0509 00:29:56.129195 1769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:29:56.129772 kubelet[1769]: E0509 00:29:56.129287 1769 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731"} May 9 00:29:56.129870 kubelet[1769]: E0509 00:29:56.129786 1769 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"026f136c-abea-42e4-91de-cf798cfb70e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:56.129870 kubelet[1769]: E0509 00:29:56.129835 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"026f136c-abea-42e4-91de-cf798cfb70e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nfq8m" podUID="026f136c-abea-42e4-91de-cf798cfb70e0" May 9 00:29:56.311268 kubelet[1769]: E0509 00:29:56.311037 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:56.362130 containerd[1473]: time="2025-05-09T00:29:56.359441098Z" level=info msg="shim disconnected" 
id=83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7 namespace=k8s.io May 9 00:29:56.362130 containerd[1473]: time="2025-05-09T00:29:56.359537710Z" level=warning msg="cleaning up after shim disconnected" id=83323a72504b0e98332da0107c33dc30c734162e36b9a189bec5f807d9011ad7 namespace=k8s.io May 9 00:29:56.362130 containerd[1473]: time="2025-05-09T00:29:56.359559275Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:29:57.082537 kubelet[1769]: E0509 00:29:57.081893 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:57.086454 containerd[1473]: time="2025-05-09T00:29:57.086117049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 00:29:57.312241 kubelet[1769]: E0509 00:29:57.312080 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:58.313153 kubelet[1769]: E0509 00:29:58.313052 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:29:58.667169 systemd[1]: Created slice kubepods-besteffort-pod9829a1f8_fe60_429f_8fd3_583c0b79c315.slice - libcontainer container kubepods-besteffort-pod9829a1f8_fe60_429f_8fd3_583c0b79c315.slice. 
May 9 00:29:58.685447 kubelet[1769]: I0509 00:29:58.684588 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpj4\" (UniqueName: \"kubernetes.io/projected/9829a1f8-fe60-429f-8fd3-583c0b79c315-kube-api-access-fdpj4\") pod \"nginx-deployment-8587fbcb89-vhqg4\" (UID: \"9829a1f8-fe60-429f-8fd3-583c0b79c315\") " pod="default/nginx-deployment-8587fbcb89-vhqg4" May 9 00:29:58.972751 containerd[1473]: time="2025-05-09T00:29:58.972580468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhqg4,Uid:9829a1f8-fe60-429f-8fd3-583c0b79c315,Namespace:default,Attempt:0,}" May 9 00:29:59.232176 containerd[1473]: time="2025-05-09T00:29:59.231902836Z" level=error msg="Failed to destroy network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:59.232982 containerd[1473]: time="2025-05-09T00:29:59.232953620Z" level=error msg="encountered an error cleaning up failed sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:59.233026 containerd[1473]: time="2025-05-09T00:29:59.233008910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhqg4,Uid:9829a1f8-fe60-429f-8fd3-583c0b79c315,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 9 00:29:59.233322 kubelet[1769]: E0509 00:29:59.233267 1769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:59.233626 kubelet[1769]: E0509 00:29:59.233339 1769 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhqg4" May 9 00:29:59.233626 kubelet[1769]: E0509 00:29:59.233361 1769 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhqg4" May 9 00:29:59.233626 kubelet[1769]: E0509 00:29:59.233402 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vhqg4_default(9829a1f8-fe60-429f-8fd3-583c0b79c315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-vhqg4_default(9829a1f8-fe60-429f-8fd3-583c0b79c315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhqg4" podUID="9829a1f8-fe60-429f-8fd3-583c0b79c315" May 9 00:29:59.233893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c-shm.mount: Deactivated successfully. May 9 00:29:59.313270 kubelet[1769]: E0509 00:29:59.313212 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:00.107709 kubelet[1769]: I0509 00:30:00.107626 1769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" May 9 00:30:00.109496 containerd[1473]: time="2025-05-09T00:30:00.108654543Z" level=info msg="StopPodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\"" May 9 00:30:00.109496 containerd[1473]: time="2025-05-09T00:30:00.108941737Z" level=info msg="Ensure that sandbox d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c in task-service has been cleanup successfully" May 9 00:30:00.171533 containerd[1473]: time="2025-05-09T00:30:00.171138946Z" level=error msg="StopPodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" failed" error="failed to destroy network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:30:00.173301 kubelet[1769]: E0509 00:30:00.173214 1769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" May 9 00:30:00.173449 kubelet[1769]: E0509 00:30:00.173314 1769 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"} May 9 00:30:00.173449 kubelet[1769]: E0509 00:30:00.173369 1769 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9829a1f8-fe60-429f-8fd3-583c0b79c315\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:30:00.173449 kubelet[1769]: E0509 00:30:00.173405 1769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9829a1f8-fe60-429f-8fd3-583c0b79c315\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhqg4" podUID="9829a1f8-fe60-429f-8fd3-583c0b79c315" May 9 00:30:00.314006 kubelet[1769]: E0509 00:30:00.313933 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:01.295513 
kubelet[1769]: E0509 00:30:01.295452 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:01.315325 kubelet[1769]: E0509 00:30:01.315291 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:02.317664 kubelet[1769]: E0509 00:30:02.317595 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:03.580536 kubelet[1769]: E0509 00:30:03.580473 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:03.740103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315784841.mount: Deactivated successfully. May 9 00:30:04.225043 containerd[1473]: time="2025-05-09T00:30:04.224936792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:04.225877 containerd[1473]: time="2025-05-09T00:30:04.225788578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 9 00:30:04.227279 containerd[1473]: time="2025-05-09T00:30:04.227229665Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:04.229243 containerd[1473]: time="2025-05-09T00:30:04.229191332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:04.229922 containerd[1473]: time="2025-05-09T00:30:04.229847126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", 
repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.14367071s" May 9 00:30:04.229922 containerd[1473]: time="2025-05-09T00:30:04.229907724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 9 00:30:04.242495 containerd[1473]: time="2025-05-09T00:30:04.242398208Z" level=info msg="CreateContainer within sandbox \"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 00:30:04.329180 containerd[1473]: time="2025-05-09T00:30:04.329052504Z" level=info msg="CreateContainer within sandbox \"65d14fce9f991074ef771a41a709987c12c28548f8239915008b5453c5126471\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ab58dcd54d76820526b61c156ac5bf8547e1d8261bafbc4773b18c2bc8926196\"" May 9 00:30:04.330030 containerd[1473]: time="2025-05-09T00:30:04.329970665Z" level=info msg="StartContainer for \"ab58dcd54d76820526b61c156ac5bf8547e1d8261bafbc4773b18c2bc8926196\"" May 9 00:30:04.384650 systemd[1]: Started cri-containerd-ab58dcd54d76820526b61c156ac5bf8547e1d8261bafbc4773b18c2bc8926196.scope - libcontainer container ab58dcd54d76820526b61c156ac5bf8547e1d8261bafbc4773b18c2bc8926196. 
May 9 00:30:04.436587 containerd[1473]: time="2025-05-09T00:30:04.436387021Z" level=info msg="StartContainer for \"ab58dcd54d76820526b61c156ac5bf8547e1d8261bafbc4773b18c2bc8926196\" returns successfully" May 9 00:30:04.581959 kubelet[1769]: E0509 00:30:04.581690 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:04.587352 kubelet[1769]: E0509 00:30:04.587320 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:30:05.054574 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 00:30:05.054812 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. May 9 00:30:05.582273 kubelet[1769]: E0509 00:30:05.582189 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:05.589383 kubelet[1769]: E0509 00:30:05.589328 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:30:06.582727 kubelet[1769]: E0509 00:30:06.582669 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:06.644484 kernel: bpftool[2595]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 00:30:06.914174 systemd-networkd[1405]: vxlan.calico: Link UP May 9 00:30:06.914186 systemd-networkd[1405]: vxlan.calico: Gained carrier May 9 00:30:07.583096 kubelet[1769]: E0509 00:30:07.583032 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:08.379645 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL May 9 00:30:08.584118 kubelet[1769]: E0509 00:30:08.584012 1769 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:09.584323 kubelet[1769]: E0509 00:30:09.584221 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:10.585136 kubelet[1769]: E0509 00:30:10.585038 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:10.804945 containerd[1473]: time="2025-05-09T00:30:10.804860885Z" level=info msg="StopPodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\"" May 9 00:30:10.875650 kubelet[1769]: I0509 00:30:10.875110 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cffx7" podStartSLOduration=8.298974207 podStartE2EDuration="28.875087105s" podCreationTimestamp="2025-05-09 00:29:42 +0000 UTC" firstStartedPulling="2025-05-09 00:29:43.654865577 +0000 UTC m=+2.908123221" lastFinishedPulling="2025-05-09 00:30:04.230978476 +0000 UTC m=+23.484236119" observedRunningTime="2025-05-09 00:30:04.650219851 +0000 UTC m=+23.903477505" watchObservedRunningTime="2025-05-09 00:30:10.875087105 +0000 UTC m=+30.128344758" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:10.874 [INFO][2702] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:10.875 [INFO][2702] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" iface="eth0" netns="/var/run/netns/cni-c78786f1-f86a-09c7-2099-59a986c150b3" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:10.875 [INFO][2702] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" iface="eth0" netns="/var/run/netns/cni-c78786f1-f86a-09c7-2099-59a986c150b3" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:10.875 [INFO][2702] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" iface="eth0" netns="/var/run/netns/cni-c78786f1-f86a-09c7-2099-59a986c150b3" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:10.875 [INFO][2702] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:10.875 [INFO][2702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.017 [INFO][2711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.018 [INFO][2711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.018 [INFO][2711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.025 [WARNING][2711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.025 [INFO][2711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.026 [INFO][2711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:11.034057 containerd[1473]: 2025-05-09 00:30:11.031 [INFO][2702] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:11.034620 containerd[1473]: time="2025-05-09T00:30:11.034327336Z" level=info msg="TearDown network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" successfully" May 9 00:30:11.034620 containerd[1473]: time="2025-05-09T00:30:11.034359904Z" level=info msg="StopPodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" returns successfully" May 9 00:30:11.036632 containerd[1473]: time="2025-05-09T00:30:11.036587155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfq8m,Uid:026f136c-abea-42e4-91de-cf798cfb70e0,Namespace:calico-system,Attempt:1,}" May 9 00:30:11.036833 systemd[1]: run-netns-cni\x2dc78786f1\x2df86a\x2d09c7\x2d2099\x2d59a986c150b3.mount: Deactivated successfully. 
May 9 00:30:11.233384 systemd-networkd[1405]: calic865ee88b10: Link UP
May 9 00:30:11.233730 systemd-networkd[1405]: calic865ee88b10: Gained carrier
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.099 [INFO][2719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-csi--node--driver--nfq8m-eth0 csi-node-driver- calico-system 026f136c-abea-42e4-91de-cf798cfb70e0 1032 0 2025-05-09 00:29:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.53 csi-node-driver-nfq8m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic865ee88b10 [] []}} ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.100 [INFO][2719] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.169 [INFO][2733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" HandleID="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.190 [INFO][2733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" HandleID="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f52c0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.53", "pod":"csi-node-driver-nfq8m", "timestamp":"2025-05-09 00:30:11.169469449 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.190 [INFO][2733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.190 [INFO][2733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.190 [INFO][2733] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53'
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.195 [INFO][2733] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.201 [INFO][2733] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.208 [INFO][2733] ipam/ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.210 [INFO][2733] ipam/ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.213 [INFO][2733] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.213 [INFO][2733] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.215 [INFO][2733] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.220 [INFO][2733] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.226 [INFO][2733] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.100.193/26] block=192.168.100.192/26 handle="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.226 [INFO][2733] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.193/26] handle="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" host="10.0.0.53"
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.226 [INFO][2733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:11.251529 containerd[1473]: 2025-05-09 00:30:11.226 [INFO][2733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.193/26] IPv6=[] ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" HandleID="k8s-pod-network.8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.252633 containerd[1473]: 2025-05-09 00:30:11.229 [INFO][2719] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--nfq8m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"026f136c-abea-42e4-91de-cf798cfb70e0", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"csi-node-driver-nfq8m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic865ee88b10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:30:11.252633 containerd[1473]: 2025-05-09 00:30:11.230 [INFO][2719] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.100.193/32] ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.252633 containerd[1473]: 2025-05-09 00:30:11.230 [INFO][2719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic865ee88b10 ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.252633 containerd[1473]: 2025-05-09 00:30:11.234 [INFO][2719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.252633 containerd[1473]: 2025-05-09 00:30:11.234 [INFO][2719] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--nfq8m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"026f136c-abea-42e4-91de-cf798cfb70e0", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5", Pod:"csi-node-driver-nfq8m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic865ee88b10", MAC:"96:53:44:2c:1c:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:30:11.252633 containerd[1473]: 2025-05-09 00:30:11.245 [INFO][2719] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5" Namespace="calico-system" Pod="csi-node-driver-nfq8m" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0"
May 9 00:30:11.280003 containerd[1473]: time="2025-05-09T00:30:11.279886747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:30:11.280003 containerd[1473]: time="2025-05-09T00:30:11.279957679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:30:11.280003 containerd[1473]: time="2025-05-09T00:30:11.279973693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:30:11.280577 containerd[1473]: time="2025-05-09T00:30:11.280073625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:30:11.316576 systemd[1]: Started cri-containerd-8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5.scope - libcontainer container 8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5.
May 9 00:30:11.338917 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:30:11.355194 containerd[1473]: time="2025-05-09T00:30:11.355116165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfq8m,Uid:026f136c-abea-42e4-91de-cf798cfb70e0,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5\""
May 9 00:30:11.357314 containerd[1473]: time="2025-05-09T00:30:11.357267881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 9 00:30:11.585340 kubelet[1769]: E0509 00:30:11.585257 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:12.585778 kubelet[1769]: E0509 00:30:12.585713 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:12.804795 containerd[1473]: time="2025-05-09T00:30:12.804732287Z" level=info msg="StopPodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\""
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.024 [INFO][2819] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.024 [INFO][2819] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" iface="eth0" netns="/var/run/netns/cni-4b0a28a3-d99a-f58b-c40c-7ff87d22bdce"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.025 [INFO][2819] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" iface="eth0" netns="/var/run/netns/cni-4b0a28a3-d99a-f58b-c40c-7ff87d22bdce"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.025 [INFO][2819] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" iface="eth0" netns="/var/run/netns/cni-4b0a28a3-d99a-f58b-c40c-7ff87d22bdce"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.025 [INFO][2819] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.025 [INFO][2819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.049 [INFO][2828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.049 [INFO][2828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.049 [INFO][2828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.054 [WARNING][2828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.054 [INFO][2828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.056 [INFO][2828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:13.061020 containerd[1473]: 2025-05-09 00:30:13.058 [INFO][2819] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:13.061578 containerd[1473]: time="2025-05-09T00:30:13.061181215Z" level=info msg="TearDown network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" successfully"
May 9 00:30:13.061578 containerd[1473]: time="2025-05-09T00:30:13.061217019Z" level=info msg="StopPodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" returns successfully"
May 9 00:30:13.063037 systemd[1]: run-netns-cni\x2d4b0a28a3\x2dd99a\x2df58b\x2dc40c\x2d7ff87d22bdce.mount: Deactivated successfully.
May 9 00:30:13.063883 containerd[1473]: time="2025-05-09T00:30:13.063845261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhqg4,Uid:9829a1f8-fe60-429f-8fd3-583c0b79c315,Namespace:default,Attempt:1,}"
May 9 00:30:13.246038 systemd-networkd[1405]: calic865ee88b10: Gained IPv6LL
May 9 00:30:13.423679 systemd-networkd[1405]: cali301432bf15e: Link UP
May 9 00:30:13.424511 systemd-networkd[1405]: cali301432bf15e: Gained carrier
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.219 [INFO][2835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0 nginx-deployment-8587fbcb89- default 9829a1f8-fe60-429f-8fd3-583c0b79c315 1045 0 2025-05-09 00:29:58 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 nginx-deployment-8587fbcb89-vhqg4 eth0 default [] [] [kns.default ksa.default.default] cali301432bf15e [] []}} ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.219 [INFO][2835] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.296 [INFO][2850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" HandleID="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.314 [INFO][2850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" HandleID="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e8ab0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"nginx-deployment-8587fbcb89-vhqg4", "timestamp":"2025-05-09 00:30:13.296180598 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.314 [INFO][2850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.314 [INFO][2850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.314 [INFO][2850] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53'
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.319 [INFO][2850] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.334 [INFO][2850] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.348 [INFO][2850] ipam/ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.356 [INFO][2850] ipam/ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.363 [INFO][2850] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.363 [INFO][2850] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.369 [INFO][2850] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.384 [INFO][2850] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.404 [INFO][2850] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.100.194/26] block=192.168.100.192/26 handle="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.404 [INFO][2850] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.194/26] handle="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" host="10.0.0.53"
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.404 [INFO][2850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:13.451893 containerd[1473]: 2025-05-09 00:30:13.404 [INFO][2850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.194/26] IPv6=[] ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" HandleID="k8s-pod-network.f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.453031 containerd[1473]: 2025-05-09 00:30:13.411 [INFO][2835] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9829a1f8-fe60-429f-8fd3-583c0b79c315", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-vhqg4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali301432bf15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:30:13.453031 containerd[1473]: 2025-05-09 00:30:13.411 [INFO][2835] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.100.194/32] ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.453031 containerd[1473]: 2025-05-09 00:30:13.411 [INFO][2835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali301432bf15e ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.453031 containerd[1473]: 2025-05-09 00:30:13.423 [INFO][2835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.453031 containerd[1473]: 2025-05-09 00:30:13.424 [INFO][2835] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9829a1f8-fe60-429f-8fd3-583c0b79c315", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd", Pod:"nginx-deployment-8587fbcb89-vhqg4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali301432bf15e", MAC:"06:43:c6:b6:55:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:30:13.453031 containerd[1473]: 2025-05-09 00:30:13.442 [INFO][2835] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhqg4" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:13.505899 containerd[1473]: time="2025-05-09T00:30:13.505219613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:30:13.505899 containerd[1473]: time="2025-05-09T00:30:13.505356584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:30:13.505899 containerd[1473]: time="2025-05-09T00:30:13.505379906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:30:13.505899 containerd[1473]: time="2025-05-09T00:30:13.505587401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:30:13.543776 systemd[1]: Started cri-containerd-f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd.scope - libcontainer container f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd.
May 9 00:30:13.566899 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:30:13.588982 kubelet[1769]: E0509 00:30:13.587107 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:13.607056 containerd[1473]: time="2025-05-09T00:30:13.606987720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhqg4,Uid:9829a1f8-fe60-429f-8fd3-583c0b79c315,Namespace:default,Attempt:1,} returns sandbox id \"f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd\""
May 9 00:30:14.273844 containerd[1473]: time="2025-05-09T00:30:14.273744861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:30:14.274551 containerd[1473]: time="2025-05-09T00:30:14.274507527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 9 00:30:14.275967 containerd[1473]: time="2025-05-09T00:30:14.275908286Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:30:14.278520 containerd[1473]: time="2025-05-09T00:30:14.278449811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:30:14.279243 containerd[1473]: time="2025-05-09T00:30:14.279174448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.92185234s"
May 9 00:30:14.279243 containerd[1473]: time="2025-05-09T00:30:14.279238323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 9 00:30:14.280559 containerd[1473]: time="2025-05-09T00:30:14.280500062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 9 00:30:14.281867 containerd[1473]: time="2025-05-09T00:30:14.281823420Z" level=info msg="CreateContainer within sandbox \"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 9 00:30:14.303207 containerd[1473]: time="2025-05-09T00:30:14.303126048Z" level=info msg="CreateContainer within sandbox \"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cb4c2d6fcab6619e4266677d0e521360aad3c9443ea818384ad754aa00be882d\""
May 9 00:30:14.303805 containerd[1473]: time="2025-05-09T00:30:14.303768804Z" level=info msg="StartContainer for \"cb4c2d6fcab6619e4266677d0e521360aad3c9443ea818384ad754aa00be882d\""
May 9 00:30:14.349713 systemd[1]: Started cri-containerd-cb4c2d6fcab6619e4266677d0e521360aad3c9443ea818384ad754aa00be882d.scope - libcontainer container cb4c2d6fcab6619e4266677d0e521360aad3c9443ea818384ad754aa00be882d.
May 9 00:30:14.391977 containerd[1473]: time="2025-05-09T00:30:14.391918692Z" level=info msg="StartContainer for \"cb4c2d6fcab6619e4266677d0e521360aad3c9443ea818384ad754aa00be882d\" returns successfully"
May 9 00:30:14.588327 kubelet[1769]: E0509 00:30:14.588256 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:14.844365 systemd-networkd[1405]: cali301432bf15e: Gained IPv6LL
May 9 00:30:15.590240 kubelet[1769]: E0509 00:30:15.588577 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:16.593988 kubelet[1769]: E0509 00:30:16.592597 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:17.593451 kubelet[1769]: E0509 00:30:17.593362 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:17.643511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2585539253.mount: Deactivated successfully.
May 9 00:30:18.506868 update_engine[1456]: I20250509 00:30:18.505595 1456 update_attempter.cc:509] Updating boot flags...
May 9 00:30:18.593931 kubelet[1769]: E0509 00:30:18.593839 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:18.739469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2978)
May 9 00:30:18.797455 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2982)
May 9 00:30:18.843500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2982)
May 9 00:30:19.070074 containerd[1473]: time="2025-05-09T00:30:19.070013929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:30:19.071031 containerd[1473]: time="2025-05-09T00:30:19.070976756Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220"
May 9 00:30:19.072237 containerd[1473]: time="2025-05-09T00:30:19.072209890Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:30:19.074990 containerd[1473]: time="2025-05-09T00:30:19.074952789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:30:19.075948 containerd[1473]: time="2025-05-09T00:30:19.075908500Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 4.795353988s"
May 9 00:30:19.075992 containerd[1473]: time="2025-05-09T00:30:19.075951022Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\""
May 9 00:30:19.077272 containerd[1473]: time="2025-05-09T00:30:19.077237014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 9 00:30:19.078498 containerd[1473]: time="2025-05-09T00:30:19.078467001Z" level=info msg="CreateContainer within sandbox \"f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
May 9 00:30:19.092485 containerd[1473]: time="2025-05-09T00:30:19.092415614Z" level=info msg="CreateContainer within sandbox \"f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b34d4949ddcaa987eb3a94ce874e9478de66a1af14fb70602be2adb62a9849b5\""
May 9 00:30:19.093001 containerd[1473]: time="2025-05-09T00:30:19.092958502Z" level=info msg="StartContainer for \"b34d4949ddcaa987eb3a94ce874e9478de66a1af14fb70602be2adb62a9849b5\""
May 9 00:30:19.172605 systemd[1]: Started cri-containerd-b34d4949ddcaa987eb3a94ce874e9478de66a1af14fb70602be2adb62a9849b5.scope - libcontainer container b34d4949ddcaa987eb3a94ce874e9478de66a1af14fb70602be2adb62a9849b5.
May 9 00:30:19.385386 containerd[1473]: time="2025-05-09T00:30:19.385322455Z" level=info msg="StartContainer for \"b34d4949ddcaa987eb3a94ce874e9478de66a1af14fb70602be2adb62a9849b5\" returns successfully" May 9 00:30:19.594444 kubelet[1769]: E0509 00:30:19.594374 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:20.595285 kubelet[1769]: E0509 00:30:20.595208 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:21.050618 containerd[1473]: time="2025-05-09T00:30:21.050402921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:21.069900 containerd[1473]: time="2025-05-09T00:30:21.069778455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 9 00:30:21.086896 containerd[1473]: time="2025-05-09T00:30:21.086829186Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:21.101152 containerd[1473]: time="2025-05-09T00:30:21.101104083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:21.101931 containerd[1473]: time="2025-05-09T00:30:21.101889278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.024620237s" May 9 00:30:21.101966 containerd[1473]: time="2025-05-09T00:30:21.101933032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 9 00:30:21.104501 containerd[1473]: time="2025-05-09T00:30:21.104440800Z" level=info msg="CreateContainer within sandbox \"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 9 00:30:21.123543 containerd[1473]: time="2025-05-09T00:30:21.123476718Z" level=info msg="CreateContainer within sandbox \"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fae7bd920cae5080c065246a4e35e8fb27ff13ef65c25aeabdc30e247fdc7cb6\"" May 9 00:30:21.124276 containerd[1473]: time="2025-05-09T00:30:21.124132836Z" level=info msg="StartContainer for \"fae7bd920cae5080c065246a4e35e8fb27ff13ef65c25aeabdc30e247fdc7cb6\"" May 9 00:30:21.166591 systemd[1]: Started cri-containerd-fae7bd920cae5080c065246a4e35e8fb27ff13ef65c25aeabdc30e247fdc7cb6.scope - libcontainer container fae7bd920cae5080c065246a4e35e8fb27ff13ef65c25aeabdc30e247fdc7cb6. 
May 9 00:30:21.225946 containerd[1473]: time="2025-05-09T00:30:21.225870637Z" level=info msg="StartContainer for \"fae7bd920cae5080c065246a4e35e8fb27ff13ef65c25aeabdc30e247fdc7cb6\" returns successfully" May 9 00:30:21.295961 kubelet[1769]: E0509 00:30:21.295899 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:21.596141 kubelet[1769]: E0509 00:30:21.596082 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:21.713585 kubelet[1769]: I0509 00:30:21.713522 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-vhqg4" podStartSLOduration=18.245255604 podStartE2EDuration="23.713502417s" podCreationTimestamp="2025-05-09 00:29:58 +0000 UTC" firstStartedPulling="2025-05-09 00:30:13.60876701 +0000 UTC m=+32.862024653" lastFinishedPulling="2025-05-09 00:30:19.077013823 +0000 UTC m=+38.330271466" observedRunningTime="2025-05-09 00:30:19.659621852 +0000 UTC m=+38.912879515" watchObservedRunningTime="2025-05-09 00:30:21.713502417 +0000 UTC m=+40.966760060" May 9 00:30:21.713768 kubelet[1769]: I0509 00:30:21.713674 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nfq8m" podStartSLOduration=29.967478399 podStartE2EDuration="39.713670517s" podCreationTimestamp="2025-05-09 00:29:42 +0000 UTC" firstStartedPulling="2025-05-09 00:30:11.356858088 +0000 UTC m=+30.610115741" lastFinishedPulling="2025-05-09 00:30:21.103050226 +0000 UTC m=+40.356307859" observedRunningTime="2025-05-09 00:30:21.713167161 +0000 UTC m=+40.966424814" watchObservedRunningTime="2025-05-09 00:30:21.713670517 +0000 UTC m=+40.966928160" May 9 00:30:21.860188 kubelet[1769]: I0509 00:30:21.860070 1769 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 9 00:30:21.860188 kubelet[1769]: I0509 00:30:21.860112 1769 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 9 00:30:22.597369 kubelet[1769]: E0509 00:30:22.597295 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:23.521628 systemd[1]: Created slice kubepods-besteffort-pod72ecb0ad_4594_49ec_8875_23b735f97ef3.slice - libcontainer container kubepods-besteffort-pod72ecb0ad_4594_49ec_8875_23b735f97ef3.slice. May 9 00:30:23.545261 kubelet[1769]: I0509 00:30:23.545163 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtdc\" (UniqueName: \"kubernetes.io/projected/72ecb0ad-4594-49ec-8875-23b735f97ef3-kube-api-access-6rtdc\") pod \"nfs-server-provisioner-0\" (UID: \"72ecb0ad-4594-49ec-8875-23b735f97ef3\") " pod="default/nfs-server-provisioner-0" May 9 00:30:23.545261 kubelet[1769]: I0509 00:30:23.545269 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/72ecb0ad-4594-49ec-8875-23b735f97ef3-data\") pod \"nfs-server-provisioner-0\" (UID: \"72ecb0ad-4594-49ec-8875-23b735f97ef3\") " pod="default/nfs-server-provisioner-0" May 9 00:30:23.600121 kubelet[1769]: E0509 00:30:23.600022 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:23.826346 containerd[1473]: time="2025-05-09T00:30:23.826137921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:72ecb0ad-4594-49ec-8875-23b735f97ef3,Namespace:default,Attempt:0,}" May 9 00:30:24.385337 systemd-networkd[1405]: cali60e51b789ff: Link UP May 9 00:30:24.385613 systemd-networkd[1405]: cali60e51b789ff: Gained 
carrier May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.140 [INFO][3113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 72ecb0ad-4594-49ec-8875-23b735f97ef3 1104 0 2025-05-09 00:30:23 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.53 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.140 [INFO][3113] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.188 [INFO][3128] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" 
HandleID="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.223 [INFO][3128] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" HandleID="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000507b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-09 00:30:24.18830697 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.223 [INFO][3128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.223 [INFO][3128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.223 [INFO][3128] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.225 [INFO][3128] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.229 [INFO][3128] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.234 [INFO][3128] ipam/ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.236 [INFO][3128] ipam/ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.238 [INFO][3128] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.238 [INFO][3128] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.239 [INFO][3128] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158 May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.357 [INFO][3128] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.373 [INFO][3128] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.100.195/26] block=192.168.100.192/26 
handle="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.373 [INFO][3128] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.195/26] handle="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" host="10.0.0.53" May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.373 [INFO][3128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:24.420080 containerd[1473]: 2025-05-09 00:30:24.373 [INFO][3128] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.195/26] IPv6=[] ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" HandleID="k8s-pod-network.13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.421133 containerd[1473]: 2025-05-09 00:30:24.377 [INFO][3113] cni-plugin/k8s.go 386: Populated endpoint ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"72ecb0ad-4594-49ec-8875-23b735f97ef3", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:24.421133 containerd[1473]: 2025-05-09 00:30:24.380 [INFO][3113] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.100.195/32] ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.421133 containerd[1473]: 2025-05-09 00:30:24.380 [INFO][3113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.421133 containerd[1473]: 2025-05-09 00:30:24.382 [INFO][3113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.421335 containerd[1473]: 2025-05-09 00:30:24.383 [INFO][3113] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"72ecb0ad-4594-49ec-8875-23b735f97ef3", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"2e:31:91:99:3c:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:24.421335 containerd[1473]: 2025-05-09 00:30:24.415 [INFO][3113] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" May 9 00:30:24.601214 kubelet[1769]: E0509 00:30:24.601070 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:24.641237 containerd[1473]: time="2025-05-09T00:30:24.639584103Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:30:24.641237 containerd[1473]: time="2025-05-09T00:30:24.640477727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:30:24.641237 containerd[1473]: time="2025-05-09T00:30:24.640494245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:30:24.641237 containerd[1473]: time="2025-05-09T00:30:24.640607076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:30:24.681770 systemd[1]: Started cri-containerd-13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158.scope - libcontainer container 13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158. May 9 00:30:24.703718 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:30:24.746808 containerd[1473]: time="2025-05-09T00:30:24.746578968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:72ecb0ad-4594-49ec-8875-23b735f97ef3,Namespace:default,Attempt:0,} returns sandbox id \"13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158\"" May 9 00:30:24.749584 containerd[1473]: time="2025-05-09T00:30:24.749529854Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 9 00:30:25.467680 systemd-networkd[1405]: cali60e51b789ff: Gained IPv6LL May 9 00:30:25.603858 kubelet[1769]: E0509 00:30:25.601793 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:26.602001 kubelet[1769]: E0509 00:30:26.601947 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
May 9 00:30:26.726330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821746419.mount: Deactivated successfully. May 9 00:30:27.603449 kubelet[1769]: E0509 00:30:27.602653 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:28.603378 kubelet[1769]: E0509 00:30:28.603330 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:29.604296 kubelet[1769]: E0509 00:30:29.604236 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:29.916651 containerd[1473]: time="2025-05-09T00:30:29.916403494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:29.917594 containerd[1473]: time="2025-05-09T00:30:29.917482103Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" May 9 00:30:29.919581 containerd[1473]: time="2025-05-09T00:30:29.919501194Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:29.924639 containerd[1473]: time="2025-05-09T00:30:29.924547989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:29.926275 containerd[1473]: time="2025-05-09T00:30:29.926217041Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.17664013s" May 9 00:30:29.926348 containerd[1473]: time="2025-05-09T00:30:29.926282506Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 9 00:30:29.928874 containerd[1473]: time="2025-05-09T00:30:29.928831486Z" level=info msg="CreateContainer within sandbox \"13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 9 00:30:30.068387 containerd[1473]: time="2025-05-09T00:30:30.068311005Z" level=info msg="CreateContainer within sandbox \"13a14b45152232225b50548a250f3c043dcb96514a64d510827c115f5eff0158\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"54ddcc5782e85761e53bf22c1684cd9266e9dde8bb9f07c99b930974d3506a8e\"" May 9 00:30:30.069029 containerd[1473]: time="2025-05-09T00:30:30.068999517Z" level=info msg="StartContainer for \"54ddcc5782e85761e53bf22c1684cd9266e9dde8bb9f07c99b930974d3506a8e\"" May 9 00:30:30.103551 systemd[1]: Started cri-containerd-54ddcc5782e85761e53bf22c1684cd9266e9dde8bb9f07c99b930974d3506a8e.scope - libcontainer container 54ddcc5782e85761e53bf22c1684cd9266e9dde8bb9f07c99b930974d3506a8e. 
May 9 00:30:30.132768 containerd[1473]: time="2025-05-09T00:30:30.132710358Z" level=info msg="StartContainer for \"54ddcc5782e85761e53bf22c1684cd9266e9dde8bb9f07c99b930974d3506a8e\" returns successfully" May 9 00:30:30.604635 kubelet[1769]: E0509 00:30:30.604567 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:30.677666 kubelet[1769]: I0509 00:30:30.677601 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.4992920290000002 podStartE2EDuration="7.677578556s" podCreationTimestamp="2025-05-09 00:30:23 +0000 UTC" firstStartedPulling="2025-05-09 00:30:24.748914593 +0000 UTC m=+44.002172236" lastFinishedPulling="2025-05-09 00:30:29.92720112 +0000 UTC m=+49.180458763" observedRunningTime="2025-05-09 00:30:30.677537375 +0000 UTC m=+49.930795018" watchObservedRunningTime="2025-05-09 00:30:30.677578556 +0000 UTC m=+49.930836199" May 9 00:30:30.940820 kubelet[1769]: E0509 00:30:30.940673 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:30:31.605267 kubelet[1769]: E0509 00:30:31.605182 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:32.609307 kubelet[1769]: E0509 00:30:32.609175 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:33.609969 kubelet[1769]: E0509 00:30:33.609795 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:34.610911 kubelet[1769]: E0509 00:30:34.610723 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:35.611751 kubelet[1769]: E0509 
00:30:35.611684 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:36.612668 kubelet[1769]: E0509 00:30:36.612577 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:37.613206 kubelet[1769]: E0509 00:30:37.613125 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:38.614662 kubelet[1769]: E0509 00:30:38.614560 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:39.615221 kubelet[1769]: E0509 00:30:39.615044 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:39.690360 systemd[1]: Created slice kubepods-besteffort-pod8e27659d_e9ce_484e_a036_44bea46d8df1.slice - libcontainer container kubepods-besteffort-pod8e27659d_e9ce_484e_a036_44bea46d8df1.slice. May 9 00:30:39.859710 kubelet[1769]: I0509 00:30:39.859608 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7854\" (UniqueName: \"kubernetes.io/projected/8e27659d-e9ce-484e-a036-44bea46d8df1-kube-api-access-s7854\") pod \"test-pod-1\" (UID: \"8e27659d-e9ce-484e-a036-44bea46d8df1\") " pod="default/test-pod-1" May 9 00:30:39.859710 kubelet[1769]: I0509 00:30:39.859696 1769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8be43597-1398-45a5-b1e1-8bdfe2de65ac\" (UniqueName: \"kubernetes.io/nfs/8e27659d-e9ce-484e-a036-44bea46d8df1-pvc-8be43597-1398-45a5-b1e1-8bdfe2de65ac\") pod \"test-pod-1\" (UID: \"8e27659d-e9ce-484e-a036-44bea46d8df1\") " pod="default/test-pod-1" May 9 00:30:39.993783 kernel: FS-Cache: Loaded May 9 00:30:40.094838 kernel: RPC: Registered named UNIX socket transport module. 
May 9 00:30:40.095021 kernel: RPC: Registered udp transport module. May 9 00:30:40.095049 kernel: RPC: Registered tcp transport module. May 9 00:30:40.096106 kernel: RPC: Registered tcp-with-tls transport module. May 9 00:30:40.096162 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 9 00:30:40.445621 kernel: NFS: Registering the id_resolver key type May 9 00:30:40.445824 kernel: Key type id_resolver registered May 9 00:30:40.445857 kernel: Key type id_legacy registered May 9 00:30:40.511275 nfsidmap[3340]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 00:30:40.522983 nfsidmap[3343]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 00:30:40.594502 containerd[1473]: time="2025-05-09T00:30:40.594364442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8e27659d-e9ce-484e-a036-44bea46d8df1,Namespace:default,Attempt:0,}" May 9 00:30:40.615387 kubelet[1769]: E0509 00:30:40.615268 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:40.773857 systemd-networkd[1405]: cali5ec59c6bf6e: Link UP May 9 00:30:40.775219 systemd-networkd[1405]: cali5ec59c6bf6e: Gained carrier May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.654 [INFO][3346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-test--pod--1-eth0 default 8e27659d-e9ce-484e-a036-44bea46d8df1 1188 0 2025-05-09 00:30:23 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" 
WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.655 [INFO][3346] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.691 [INFO][3361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" HandleID="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Workload="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.702 [INFO][3361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" HandleID="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Workload="10.0.0.53-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027fac0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"test-pod-1", "timestamp":"2025-05-09 00:30:40.691944962 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.702 [INFO][3361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.702 [INFO][3361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.702 [INFO][3361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.704 [INFO][3361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.708 [INFO][3361] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.713 [INFO][3361] ipam/ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.716 [INFO][3361] ipam/ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.720 [INFO][3361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.720 [INFO][3361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.726 [INFO][3361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.737 [INFO][3361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.765 [INFO][3361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.100.196/26] block=192.168.100.192/26 
handle="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.766 [INFO][3361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.196/26] handle="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" host="10.0.0.53" May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.768 [INFO][3361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:40.789211 containerd[1473]: 2025-05-09 00:30:40.768 [INFO][3361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.196/26] IPv6=[] ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" HandleID="k8s-pod-network.542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Workload="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.790034 containerd[1473]: 2025-05-09 00:30:40.771 [INFO][3346] cni-plugin/k8s.go 386: Populated endpoint ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"8e27659d-e9ce-484e-a036-44bea46d8df1", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:40.790034 containerd[1473]: 2025-05-09 00:30:40.771 [INFO][3346] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.100.196/32] ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.790034 containerd[1473]: 2025-05-09 00:30:40.771 [INFO][3346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.790034 containerd[1473]: 2025-05-09 00:30:40.774 [INFO][3346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.790034 containerd[1473]: 2025-05-09 00:30:40.774 [INFO][3346] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"8e27659d-e9ce-484e-a036-44bea46d8df1", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 30, 
23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:bf:28:4f:25:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:40.790034 containerd[1473]: 2025-05-09 00:30:40.786 [INFO][3346] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" May 9 00:30:40.814608 containerd[1473]: time="2025-05-09T00:30:40.814312644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:30:40.814608 containerd[1473]: time="2025-05-09T00:30:40.814374243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:30:40.814608 containerd[1473]: time="2025-05-09T00:30:40.814385898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:30:40.814608 containerd[1473]: time="2025-05-09T00:30:40.814491181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:30:40.837635 systemd[1]: Started cri-containerd-542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e.scope - libcontainer container 542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e. May 9 00:30:40.852832 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:30:40.878888 containerd[1473]: time="2025-05-09T00:30:40.878830099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8e27659d-e9ce-484e-a036-44bea46d8df1,Namespace:default,Attempt:0,} returns sandbox id \"542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e\"" May 9 00:30:40.880775 containerd[1473]: time="2025-05-09T00:30:40.880727813Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 00:30:41.258961 containerd[1473]: time="2025-05-09T00:30:41.258683878Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:41.260122 containerd[1473]: time="2025-05-09T00:30:41.260045753Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 9 00:30:41.264970 containerd[1473]: time="2025-05-09T00:30:41.264809331Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 383.973971ms" May 9 00:30:41.264970 containerd[1473]: time="2025-05-09T00:30:41.264867062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 9 00:30:41.267990 containerd[1473]: time="2025-05-09T00:30:41.267914502Z" level=info 
msg="CreateContainer within sandbox \"542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 9 00:30:41.295774 kubelet[1769]: E0509 00:30:41.295683 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:41.302707 containerd[1473]: time="2025-05-09T00:30:41.302609169Z" level=info msg="CreateContainer within sandbox \"542dd39fcfa4f1eff6adf0854cc25d3d67be0a2ff40ca56727ef87e61a58d07e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8a3c134c3a2b1fead6cc9283c4164d6cdeb814a4b2b5b473234283693e90aab1\"" May 9 00:30:41.303936 containerd[1473]: time="2025-05-09T00:30:41.303871825Z" level=info msg="StartContainer for \"8a3c134c3a2b1fead6cc9283c4164d6cdeb814a4b2b5b473234283693e90aab1\"" May 9 00:30:41.316937 containerd[1473]: time="2025-05-09T00:30:41.316853738Z" level=info msg="StopPodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\"" May 9 00:30:41.350702 systemd[1]: Started cri-containerd-8a3c134c3a2b1fead6cc9283c4164d6cdeb814a4b2b5b473234283693e90aab1.scope - libcontainer container 8a3c134c3a2b1fead6cc9283c4164d6cdeb814a4b2b5b473234283693e90aab1. May 9 00:30:41.403128 containerd[1473]: time="2025-05-09T00:30:41.403008886Z" level=info msg="StartContainer for \"8a3c134c3a2b1fead6cc9283c4164d6cdeb814a4b2b5b473234283693e90aab1\" returns successfully" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.377 [WARNING][3449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--nfq8m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"026f136c-abea-42e4-91de-cf798cfb70e0", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5", Pod:"csi-node-driver-nfq8m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic865ee88b10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.377 [INFO][3449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.377 [INFO][3449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" iface="eth0" netns="" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.377 [INFO][3449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.377 [INFO][3449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.406 [INFO][3469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.406 [INFO][3469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.406 [INFO][3469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.416 [WARNING][3469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.416 [INFO][3469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.419 [INFO][3469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:41.424662 containerd[1473]: 2025-05-09 00:30:41.421 [INFO][3449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.425407 containerd[1473]: time="2025-05-09T00:30:41.424716860Z" level=info msg="TearDown network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" successfully" May 9 00:30:41.425407 containerd[1473]: time="2025-05-09T00:30:41.424752976Z" level=info msg="StopPodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" returns successfully" May 9 00:30:41.425870 containerd[1473]: time="2025-05-09T00:30:41.425809528Z" level=info msg="RemovePodSandbox for \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\"" May 9 00:30:41.425933 containerd[1473]: time="2025-05-09T00:30:41.425877021Z" level=info msg="Forcibly stopping sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\"" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.467 [WARNING][3521] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--nfq8m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"026f136c-abea-42e4-91de-cf798cfb70e0", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"8e2bfcb6e058541033398b8b06ce51793b96c2e9ef48f8b5f526df03711eb0a5", Pod:"csi-node-driver-nfq8m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic865ee88b10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.468 [INFO][3521] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.468 [INFO][3521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" iface="eth0" netns="" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.468 [INFO][3521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.468 [INFO][3521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.495 [INFO][3533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.496 [INFO][3533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.496 [INFO][3533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.505 [WARNING][3533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.505 [INFO][3533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" HandleID="k8s-pod-network.5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" Workload="10.0.0.53-k8s-csi--node--driver--nfq8m-eth0" May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.507 [INFO][3533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:41.513130 containerd[1473]: 2025-05-09 00:30:41.509 [INFO][3521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731" May 9 00:30:41.513130 containerd[1473]: time="2025-05-09T00:30:41.513081655Z" level=info msg="TearDown network for sandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" successfully" May 9 00:30:41.520988 containerd[1473]: time="2025-05-09T00:30:41.520845413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 00:30:41.520988 containerd[1473]: time="2025-05-09T00:30:41.520960485Z" level=info msg="RemovePodSandbox \"5f867eaca54685d3db81e82689be948ca3b2420ec0c4370e5d5b6c96c070c731\" returns successfully" May 9 00:30:41.521907 containerd[1473]: time="2025-05-09T00:30:41.521825053Z" level=info msg="StopPodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\"" May 9 00:30:41.616192 kubelet[1769]: E0509 00:30:41.616074 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.591 [WARNING][3555] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9829a1f8-fe60-429f-8fd3-583c0b79c315", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd", Pod:"nginx-deployment-8587fbcb89-vhqg4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali301432bf15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.592 [INFO][3555] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.592 [INFO][3555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" iface="eth0" netns="" May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.592 [INFO][3555] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.592 [INFO][3555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.619 [INFO][3563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0" May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.619 [INFO][3563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.619 [INFO][3563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.631 [WARNING][3563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.632 [INFO][3563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.634 [INFO][3563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:41.641966 containerd[1473]: 2025-05-09 00:30:41.637 [INFO][3555] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:41.642980 containerd[1473]: time="2025-05-09T00:30:41.642017025Z" level=info msg="TearDown network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" successfully"
May 9 00:30:41.642980 containerd[1473]: time="2025-05-09T00:30:41.642064024Z" level=info msg="StopPodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" returns successfully"
May 9 00:30:41.642980 containerd[1473]: time="2025-05-09T00:30:41.642868536Z" level=info msg="RemovePodSandbox for \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\""
May 9 00:30:41.642980 containerd[1473]: time="2025-05-09T00:30:41.642912829Z" level=info msg="Forcibly stopping sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\""
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.697 [WARNING][3587] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9829a1f8-fe60-429f-8fd3-583c0b79c315", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"f1bfb40de591e3622e1a4f40badc57d48b0e977d5ac53e338f26534ba28fc5bd", Pod:"nginx-deployment-8587fbcb89-vhqg4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali301432bf15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.697 [INFO][3587] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.697 [INFO][3587] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" iface="eth0" netns=""
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.697 [INFO][3587] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.697 [INFO][3587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.730 [INFO][3596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.730 [INFO][3596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.730 [INFO][3596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.739 [WARNING][3596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.739 [INFO][3596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" HandleID="k8s-pod-network.d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c" Workload="10.0.0.53-k8s-nginx--deployment--8587fbcb89--vhqg4-eth0"
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.742 [INFO][3596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:41.748286 containerd[1473]: 2025-05-09 00:30:41.745 [INFO][3587] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c"
May 9 00:30:41.748980 containerd[1473]: time="2025-05-09T00:30:41.748351121Z" level=info msg="TearDown network for sandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" successfully"
May 9 00:30:41.752816 containerd[1473]: time="2025-05-09T00:30:41.752710270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 9 00:30:41.752816 containerd[1473]: time="2025-05-09T00:30:41.752800048Z" level=info msg="RemovePodSandbox \"d946ffdf7338b0cfdb526834caa465ab5a4003c551451193615c911249bb2d4c\" returns successfully"
May 9 00:30:42.616954 kubelet[1769]: E0509 00:30:42.616829 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:42.619643 systemd-networkd[1405]: cali5ec59c6bf6e: Gained IPv6LL
May 9 00:30:43.617164 kubelet[1769]: E0509 00:30:43.617020 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:44.617743 kubelet[1769]: E0509 00:30:44.617655 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:45.618974 kubelet[1769]: E0509 00:30:45.618871 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 9 00:30:46.619823 kubelet[1769]: E0509 00:30:46.619760 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"