May 13 00:20:44.898914 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 00:20:44.898942 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:20:44.898957 kernel: BIOS-provided physical RAM map:
May 13 00:20:44.898966 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 00:20:44.898974 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 00:20:44.898982 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 00:20:44.898992 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 00:20:44.899001 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 00:20:44.899009 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 13 00:20:44.899018 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 13 00:20:44.899030 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 13 00:20:44.899038 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 13 00:20:44.899052 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 13 00:20:44.899061 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 13 00:20:44.899075 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 13 00:20:44.899084 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 00:20:44.899097 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 13 00:20:44.899107 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 13 00:20:44.899116 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 00:20:44.899125 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:20:44.899134 kernel: NX (Execute Disable) protection: active
May 13 00:20:44.899143 kernel: APIC: Static calls initialized
May 13 00:20:44.899152 kernel: efi: EFI v2.7 by EDK II
May 13 00:20:44.899162 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 13 00:20:44.899171 kernel: SMBIOS 2.8 present.
May 13 00:20:44.899180 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 13 00:20:44.899190 kernel: Hypervisor detected: KVM
May 13 00:20:44.899202 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:20:44.899211 kernel: kvm-clock: using sched offset of 4598394333 cycles
May 13 00:20:44.899220 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:20:44.899230 kernel: tsc: Detected 2794.748 MHz processor
May 13 00:20:44.899240 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:20:44.899250 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:20:44.899259 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 13 00:20:44.899277 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 00:20:44.899287 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:20:44.899300 kernel: Using GB pages for direct mapping
May 13 00:20:44.899310 kernel: Secure boot disabled
May 13 00:20:44.899320 kernel: ACPI: Early table checksum verification disabled
May 13 00:20:44.899329 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 13 00:20:44.899344 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:20:44.899354 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:44.899364 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:44.899377 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 13 00:20:44.899387 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:44.899401 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:44.899411 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:44.899420 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:44.899430 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 00:20:44.899439 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 13 00:20:44.899452 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 13 00:20:44.899462 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 13 00:20:44.899471 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 13 00:20:44.899481 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 13 00:20:44.899490 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 13 00:20:44.899500 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 13 00:20:44.899510 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 13 00:20:44.899520 kernel: No NUMA configuration found
May 13 00:20:44.899533 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 13 00:20:44.899546 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 13 00:20:44.899556 kernel: Zone ranges:
May 13 00:20:44.899566 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:20:44.899575 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 13 00:20:44.899584 kernel: Normal empty
May 13 00:20:44.899594 kernel: Movable zone start for each node
May 13 00:20:44.899603 kernel: Early memory node ranges
May 13 00:20:44.899613 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 00:20:44.899622 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 13 00:20:44.899632 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 13 00:20:44.899645 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 13 00:20:44.899654 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 13 00:20:44.899664 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 13 00:20:44.899677 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 13 00:20:44.899686 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:20:44.899696 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 00:20:44.899706 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 13 00:20:44.899715 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:20:44.899725 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 13 00:20:44.899738 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 13 00:20:44.899748 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 13 00:20:44.899758 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:20:44.899768 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:20:44.899806 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:20:44.899833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:20:44.899843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:20:44.899867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:20:44.899878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:20:44.899892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:20:44.899902 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:20:44.899912 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:20:44.899926 kernel: TSC deadline timer available
May 13 00:20:44.899937 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:20:44.899947 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 00:20:44.899956 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:20:44.899966 kernel: kvm-guest: setup PV sched yield
May 13 00:20:44.899976 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 13 00:20:44.899990 kernel: Booting paravirtualized kernel on KVM
May 13 00:20:44.900000 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:20:44.900011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 00:20:44.900020 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 00:20:44.900030 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 00:20:44.900040 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:20:44.900049 kernel: kvm-guest: PV spinlocks enabled
May 13 00:20:44.900059 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:20:44.900071 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:20:44.900090 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:20:44.900099 kernel: random: crng init done
May 13 00:20:44.900109 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:20:44.900119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:20:44.900129 kernel: Fallback order for Node 0: 0
May 13 00:20:44.900139 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 13 00:20:44.900148 kernel: Policy zone: DMA32
May 13 00:20:44.900158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:20:44.900168 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved)
May 13 00:20:44.900182 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:20:44.900191 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 00:20:44.900200 kernel: ftrace: allocated 149 pages with 4 groups
May 13 00:20:44.900210 kernel: Dynamic Preempt: voluntary
May 13 00:20:44.900231 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:20:44.900245 kernel: rcu: RCU event tracing is enabled.
May 13 00:20:44.900256 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:20:44.900276 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:20:44.900288 kernel: Rude variant of Tasks RCU enabled.
May 13 00:20:44.900299 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:20:44.900310 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:20:44.900324 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:20:44.900335 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:20:44.900349 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:20:44.900360 kernel: Console: colour dummy device 80x25
May 13 00:20:44.900370 kernel: printk: console [ttyS0] enabled
May 13 00:20:44.900384 kernel: ACPI: Core revision 20230628
May 13 00:20:44.900395 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:20:44.900406 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:20:44.900417 kernel: x2apic enabled
May 13 00:20:44.900428 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 00:20:44.900438 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 00:20:44.900449 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 00:20:44.900460 kernel: kvm-guest: setup PV IPIs
May 13 00:20:44.900471 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:20:44.900485 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:20:44.900496 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 00:20:44.900507 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:20:44.900519 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:20:44.900531 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:20:44.900544 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:20:44.900554 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:20:44.900565 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:20:44.900576 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:20:44.900590 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:20:44.900600 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:20:44.900612 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 00:20:44.900622 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 00:20:44.900637 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 00:20:44.900648 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 00:20:44.900659 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:20:44.900670 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:20:44.900684 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:20:44.900695 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:20:44.900706 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 00:20:44.900717 kernel: Freeing SMP alternatives memory: 32K
May 13 00:20:44.900727 kernel: pid_max: default: 32768 minimum: 301
May 13 00:20:44.900738 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:20:44.900748 kernel: landlock: Up and running.
May 13 00:20:44.900759 kernel: SELinux: Initializing.
May 13 00:20:44.900769 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:20:44.900783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:20:44.900794 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:20:44.900804 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:20:44.900815 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:20:44.900825 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:20:44.900836 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:20:44.900846 kernel: ... version: 0
May 13 00:20:44.900880 kernel: ... bit width: 48
May 13 00:20:44.900891 kernel: ... generic registers: 6
May 13 00:20:44.900905 kernel: ... value mask: 0000ffffffffffff
May 13 00:20:44.900935 kernel: ... max period: 00007fffffffffff
May 13 00:20:44.900955 kernel: ... fixed-purpose events: 0
May 13 00:20:44.900965 kernel: ... event mask: 000000000000003f
May 13 00:20:44.900975 kernel: signal: max sigframe size: 1776
May 13 00:20:44.900985 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:20:44.900996 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:20:44.901006 kernel: smp: Bringing up secondary CPUs ...
May 13 00:20:44.901021 kernel: smpboot: x86: Booting SMP configuration:
May 13 00:20:44.901035 kernel: .... node #0, CPUs: #1 #2 #3
May 13 00:20:44.901046 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:20:44.901056 kernel: smpboot: Max logical packages: 1
May 13 00:20:44.901067 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 00:20:44.901077 kernel: devtmpfs: initialized
May 13 00:20:44.901087 kernel: x86/mm: Memory block size: 128MB
May 13 00:20:44.901098 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 13 00:20:44.901109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 13 00:20:44.901119 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 13 00:20:44.901133 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 13 00:20:44.901143 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 13 00:20:44.901153 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:20:44.901164 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:20:44.901174 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:20:44.901184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:20:44.901195 kernel: audit: initializing netlink subsys (disabled)
May 13 00:20:44.901205 kernel: audit: type=2000 audit(1747095644.039:1): state=initialized audit_enabled=0 res=1
May 13 00:20:44.901216 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:20:44.901230 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:20:44.901241 kernel: cpuidle: using governor menu
May 13 00:20:44.901251 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:20:44.901262 kernel: dca service started, version 1.12.1
May 13 00:20:44.901283 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:20:44.901294 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 00:20:44.901304 kernel: PCI: Using configuration type 1 for base access
May 13 00:20:44.901315 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:20:44.901326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:20:44.901340 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:20:44.901351 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:20:44.901361 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:20:44.901372 kernel: ACPI: Added _OSI(Module Device)
May 13 00:20:44.901382 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:20:44.901392 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:20:44.901402 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:20:44.901412 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:20:44.901422 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 00:20:44.901435 kernel: ACPI: Interpreter enabled
May 13 00:20:44.901446 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:20:44.901456 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:20:44.901466 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:20:44.901477 kernel: PCI: Using E820 reservations for host bridge windows
May 13 00:20:44.901487 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:20:44.901497 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:20:44.901707 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:20:44.901894 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:20:44.902047 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:20:44.902062 kernel: PCI host bridge to bus 0000:00
May 13 00:20:44.902219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:20:44.902373 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:20:44.902514 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:20:44.902651 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:20:44.902800 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:20:44.902989 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 13 00:20:44.903137 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:20:44.903327 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:20:44.903498 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:20:44.903657 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 13 00:20:44.903824 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 13 00:20:44.903996 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 00:20:44.904144 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 13 00:20:44.904303 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:20:44.904496 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:20:44.904659 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 13 00:20:44.904809 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 13 00:20:44.904989 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 13 00:20:44.905145 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:20:44.905306 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 13 00:20:44.905454 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 13 00:20:44.905602 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 13 00:20:44.905758 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:20:44.905937 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 13 00:20:44.906107 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 13 00:20:44.906256 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 13 00:20:44.906417 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 13 00:20:44.906580 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:20:44.906728 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:20:44.906900 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:20:44.907054 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 13 00:20:44.907209 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 13 00:20:44.907368 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:20:44.907490 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 13 00:20:44.907501 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:20:44.907509 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:20:44.907517 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:20:44.907524 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:20:44.907536 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:20:44.907543 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:20:44.907551 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:20:44.907558 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:20:44.907566 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:20:44.907574 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:20:44.907581 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:20:44.907589 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:20:44.907596 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:20:44.907606 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:20:44.907613 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:20:44.907621 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:20:44.907629 kernel: iommu: Default domain type: Translated
May 13 00:20:44.907636 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:20:44.907644 kernel: efivars: Registered efivars operations
May 13 00:20:44.907651 kernel: PCI: Using ACPI for IRQ routing
May 13 00:20:44.907659 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:20:44.907666 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 13 00:20:44.907674 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 13 00:20:44.907684 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 13 00:20:44.907691 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 13 00:20:44.907814 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:20:44.908025 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:20:44.908147 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:20:44.908157 kernel: vgaarb: loaded
May 13 00:20:44.908165 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:20:44.908173 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:20:44.908186 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:20:44.908194 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:20:44.908202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:20:44.908209 kernel: pnp: PnP ACPI init
May 13 00:20:44.908348 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:20:44.908360 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:20:44.908369 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:20:44.908376 kernel: NET: Registered PF_INET protocol family
May 13 00:20:44.908387 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:20:44.908395 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:20:44.908403 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:20:44.908411 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:20:44.908418 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:20:44.908426 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:20:44.908434 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:20:44.908442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:20:44.908449 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:20:44.908460 kernel: NET: Registered PF_XDP protocol family
May 13 00:20:44.908583 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 00:20:44.908703 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 00:20:44.908813 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:20:44.908972 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:20:44.909084 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:20:44.909195 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:20:44.909316 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:20:44.909432 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 13 00:20:44.909442 kernel: PCI: CLS 0 bytes, default 64
May 13 00:20:44.909449 kernel: Initialise system trusted keyrings
May 13 00:20:44.909457 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:20:44.909465 kernel: Key type asymmetric registered
May 13 00:20:44.909472 kernel: Asymmetric key parser 'x509' registered
May 13 00:20:44.909480 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 00:20:44.909488 kernel: io scheduler mq-deadline registered
May 13 00:20:44.909495 kernel: io scheduler kyber registered
May 13 00:20:44.909506 kernel: io scheduler bfq registered
May 13 00:20:44.909514 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:20:44.909522 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:20:44.909530 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:20:44.909538 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:20:44.909545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:20:44.909553 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:20:44.909561 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:20:44.909568 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:20:44.909579 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:20:44.909734 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:20:44.909746 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:20:44.909871 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:20:44.909987 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:20:44 UTC (1747095644)
May 13 00:20:44.910098 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:20:44.910109 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 00:20:44.910116 kernel: efifb: probing for efifb
May 13 00:20:44.910128 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 13 00:20:44.910136 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 13 00:20:44.910144 kernel: efifb: scrolling: redraw
May 13 00:20:44.910152 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 13 00:20:44.910159 kernel: Console: switching to colour frame buffer device 100x37
May 13 00:20:44.910167 kernel: fb0: EFI VGA frame buffer device
May 13 00:20:44.910192 kernel: pstore: Using crash dump compression: deflate
May 13 00:20:44.910202 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 00:20:44.910210 kernel: NET: Registered PF_INET6 protocol family
May 13 00:20:44.910220 kernel: Segment Routing with IPv6
May 13 00:20:44.910228 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:20:44.910236 kernel: NET: Registered PF_PACKET protocol family
May 13 00:20:44.910244 kernel: Key type dns_resolver registered
May 13 00:20:44.910251 kernel: IPI shorthand broadcast: enabled
May 13 00:20:44.910259 kernel: sched_clock: Marking stable (831002980, 115110018)->(968410181, -22297183)
May 13 00:20:44.910275 kernel: registered taskstats version 1
May 13 00:20:44.910284 kernel: Loading compiled-in X.509 certificates
May 13 00:20:44.910293 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 00:20:44.910303 kernel: Key type .fscrypt registered
May 13 00:20:44.910311 kernel: Key type fscrypt-provisioning registered
May 13 00:20:44.910319 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:20:44.910326 kernel: ima: Allocated hash algorithm: sha1
May 13 00:20:44.910334 kernel: ima: No architecture policies found
May 13 00:20:44.910342 kernel: clk: Disabling unused clocks
May 13 00:20:44.910350 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 00:20:44.910358 kernel: Write protecting the kernel read-only data: 36864k
May 13 00:20:44.910366 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 00:20:44.910377 kernel: Run /init as init process
May 13 00:20:44.910385 kernel: with arguments:
May 13 00:20:44.910392 kernel: /init
May 13 00:20:44.910400 kernel: with environment:
May 13 00:20:44.910408 kernel: HOME=/
May 13 00:20:44.910416 kernel: TERM=linux
May 13 00:20:44.910424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:20:44.910435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:20:44.910447 systemd[1]: Detected virtualization kvm.
May 13 00:20:44.910456 systemd[1]: Detected architecture x86-64.
May 13 00:20:44.910464 systemd[1]: Running in initrd.
May 13 00:20:44.910472 systemd[1]: No hostname configured, using default hostname.
May 13 00:20:44.910485 systemd[1]: Hostname set to <localhost>.
May 13 00:20:44.910494 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:20:44.910502 systemd[1]: Queued start job for default target initrd.target.
May 13 00:20:44.910512 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:20:44.910522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:20:44.910532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:20:44.910542 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:20:44.910550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:20:44.910561 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:20:44.910572 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:20:44.910581 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:20:44.910589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:20:44.910597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:20:44.910606 systemd[1]: Reached target paths.target - Path Units.
May 13 00:20:44.910614 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:20:44.910625 systemd[1]: Reached target swap.target - Swaps.
May 13 00:20:44.910633 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:20:44.910642 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:20:44.910650 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:20:44.910658 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:20:44.910667 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:20:44.910675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:20:44.910684 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:20:44.910692 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:20:44.910703 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:20:44.910711 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:20:44.910720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:20:44.910728 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:20:44.910737 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:20:44.910745 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:20:44.910754 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:20:44.910762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:44.910773 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:20:44.910781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:20:44.910790 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:20:44.910799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:20:44.910826 systemd-journald[190]: Collecting audit messages is disabled.
May 13 00:20:44.910848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:20:44.910882 systemd-journald[190]: Journal started
May 13 00:20:44.910903 systemd-journald[190]: Runtime Journal (/run/log/journal/ed6f50483729425dbf6544b180df7cdd) is 6.0M, max 48.3M, 42.2M free.
May 13 00:20:44.898669 systemd-modules-load[193]: Inserted module 'overlay'
May 13 00:20:44.914949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:20:44.917958 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:20:44.918473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:44.922077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:20:44.927885 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:20:44.930776 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:20:44.931892 kernel: Bridge firewalling registered
May 13 00:20:44.931474 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 13 00:20:44.931539 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:20:44.935101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:20:44.939220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:20:44.947692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:20:44.950537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:20:44.952282 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:20:44.958571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:20:44.961309 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:20:44.971282 dracut-cmdline[226]: dracut-dracut-053
May 13 00:20:44.975186 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:20:44.994780 systemd-resolved[231]: Positive Trust Anchors:
May 13 00:20:44.994795 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:20:44.994826 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:20:44.997366 systemd-resolved[231]: Defaulting to hostname 'linux'.
May 13 00:20:44.998498 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:20:45.004311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:20:45.086904 kernel: SCSI subsystem initialized
May 13 00:20:45.098896 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:20:45.111906 kernel: iscsi: registered transport (tcp)
May 13 00:20:45.134894 kernel: iscsi: registered transport (qla4xxx)
May 13 00:20:45.134961 kernel: QLogic iSCSI HBA Driver
May 13 00:20:45.191167 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:20:45.199071 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:20:45.223900 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:20:45.223973 kernel: device-mapper: uevent: version 1.0.3
May 13 00:20:45.225462 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:20:45.268900 kernel: raid6: avx2x4 gen() 29709 MB/s
May 13 00:20:45.285885 kernel: raid6: avx2x2 gen() 30404 MB/s
May 13 00:20:45.302994 kernel: raid6: avx2x1 gen() 25958 MB/s
May 13 00:20:45.303022 kernel: raid6: using algorithm avx2x2 gen() 30404 MB/s
May 13 00:20:45.321209 kernel: raid6: .... xor() 18352 MB/s, rmw enabled
May 13 00:20:45.321238 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:20:45.342896 kernel: xor: automatically using best checksumming function avx
May 13 00:20:45.503892 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:20:45.517235 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:20:45.532012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:20:45.544006 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 13 00:20:45.548597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:20:45.555974 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:20:45.571987 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
May 13 00:20:45.608058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:20:45.621020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:20:45.684604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:20:45.697115 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:20:45.712434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:20:45.714701 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:20:45.715165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:20:45.715497 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:20:45.725941 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 00:20:45.725993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:20:45.734886 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:20:45.740483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:20:45.747886 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:20:45.747938 kernel: GPT:9289727 != 19775487
May 13 00:20:45.747949 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:20:45.747959 kernel: GPT:9289727 != 19775487
May 13 00:20:45.747968 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:20:45.747979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:45.748870 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:20:45.749533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:20:45.749701 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:20:45.753428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:20:45.756006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:20:45.756329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:45.758875 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:45.766876 kernel: libata version 3.00 loaded.
May 13 00:20:45.771386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:45.778868 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:20:45.779080 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:20:45.779094 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:20:45.779263 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:20:45.779412 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471)
May 13 00:20:45.784900 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472)
May 13 00:20:45.784946 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:20:45.784958 kernel: AES CTR mode by8 optimization enabled
May 13 00:20:45.787384 kernel: scsi host0: ahci
May 13 00:20:45.795567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:20:45.798516 kernel: scsi host1: ahci
May 13 00:20:45.798705 kernel: scsi host2: ahci
May 13 00:20:45.798916 kernel: scsi host3: ahci
May 13 00:20:45.800286 kernel: scsi host4: ahci
May 13 00:20:45.800460 kernel: scsi host5: ahci
May 13 00:20:45.800608 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 13 00:20:45.802115 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 13 00:20:45.802143 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 13 00:20:45.803655 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 13 00:20:45.803683 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 13 00:20:45.804972 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 13 00:20:45.810794 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:20:45.821281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:20:45.829205 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:20:45.829671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:20:45.840992 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:20:45.841424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:20:45.841482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:45.841793 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:45.843015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:45.857063 disk-uuid[555]: Primary Header is updated.
May 13 00:20:45.857063 disk-uuid[555]: Secondary Entries is updated.
May 13 00:20:45.857063 disk-uuid[555]: Secondary Header is updated.
May 13 00:20:45.861142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:45.864229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:45.866872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:45.871034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:20:45.893067 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:20:46.128894 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:20:46.128988 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:20:46.129880 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:20:46.129895 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:20:46.130498 kernel: ata3.00: applying bridge limits
May 13 00:20:46.131875 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:20:46.131887 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:20:46.132874 kernel: ata3.00: configured for UDMA/100
May 13 00:20:46.133876 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:20:46.135883 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:20:46.194909 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:20:46.195245 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:20:46.207880 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:20:46.872890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:46.873105 disk-uuid[557]: The operation has completed successfully.
May 13 00:20:46.904819 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:20:46.904959 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:20:46.928060 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:20:46.933719 sh[595]: Success
May 13 00:20:46.946879 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:20:46.982458 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:20:46.996392 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:20:47.000929 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:20:47.010475 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 00:20:47.010505 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 00:20:47.010516 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:20:47.011512 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:20:47.012897 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:20:47.016953 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:20:47.017928 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:20:47.027049 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:20:47.029108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:20:47.037450 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:47.037520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:20:47.037536 kernel: BTRFS info (device vda6): using free space tree
May 13 00:20:47.040887 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:20:47.050604 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:20:47.052469 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:47.139752 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:20:47.151976 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:20:47.174515 systemd-networkd[774]: lo: Link UP
May 13 00:20:47.174527 systemd-networkd[774]: lo: Gained carrier
May 13 00:20:47.176102 systemd-networkd[774]: Enumeration completed
May 13 00:20:47.176327 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:20:47.176503 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:20:47.176507 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:20:47.177527 systemd-networkd[774]: eth0: Link UP
May 13 00:20:47.177531 systemd-networkd[774]: eth0: Gained carrier
May 13 00:20:47.177538 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:20:47.178693 systemd[1]: Reached target network.target - Network.
May 13 00:20:47.193896 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:20:47.523673 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.35
May 13 00:20:47.523693 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
May 13 00:20:48.094165 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:20:48.120143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:20:48.277167 ignition[779]: Ignition 2.19.0
May 13 00:20:48.283237 ignition[779]: Stage: fetch-offline
May 13 00:20:48.283315 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:48.283329 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:48.283462 ignition[779]: parsed url from cmdline: ""
May 13 00:20:48.283471 ignition[779]: no config URL provided
May 13 00:20:48.283478 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:20:48.283489 ignition[779]: no config at "/usr/lib/ignition/user.ign"
May 13 00:20:48.283532 ignition[779]: op(1): [started] loading QEMU firmware config module
May 13 00:20:48.283544 ignition[779]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:20:48.319671 ignition[779]: op(1): [finished] loading QEMU firmware config module
May 13 00:20:48.367369 ignition[779]: parsing config with SHA512: 5641ff2c8d6d76c79a231a2c87687a2ab28b05e8d6c851949c4e54bdf69a1abe1adf733c652b4e3b4307d68f2fef00dbd945321e2846b11356708643639251e7
May 13 00:20:48.380040 unknown[779]: fetched base config from "system"
May 13 00:20:48.380059 unknown[779]: fetched user config from "qemu"
May 13 00:20:48.381872 ignition[779]: fetch-offline: fetch-offline passed
May 13 00:20:48.381979 ignition[779]: Ignition finished successfully
May 13 00:20:48.385794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:20:48.391722 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:20:48.404597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:20:48.434093 ignition[790]: Ignition 2.19.0
May 13 00:20:48.434108 ignition[790]: Stage: kargs
May 13 00:20:48.438428 ignition[790]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:48.438453 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:48.439544 ignition[790]: kargs: kargs passed
May 13 00:20:48.439604 ignition[790]: Ignition finished successfully
May 13 00:20:48.448879 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:20:48.461958 systemd-networkd[774]: eth0: Gained IPv6LL
May 13 00:20:48.468138 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:20:48.490370 ignition[798]: Ignition 2.19.0
May 13 00:20:48.490384 ignition[798]: Stage: disks
May 13 00:20:48.490578 ignition[798]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:48.490590 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:48.491622 ignition[798]: disks: disks passed
May 13 00:20:48.494129 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:20:48.491671 ignition[798]: Ignition finished successfully
May 13 00:20:48.496839 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:20:48.499920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:20:48.502991 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:20:48.505199 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:20:48.507117 systemd[1]: Reached target basic.target - Basic System.
May 13 00:20:48.521185 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:20:48.538690 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:20:48.548996 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:20:48.564124 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:20:48.796880 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 00:20:48.801083 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:20:48.810938 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:20:48.831266 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:20:48.843863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:20:48.850021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:20:48.850086 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:20:48.850119 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:20:48.854525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:20:48.861372 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:20:48.874997 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818) May 13 00:20:48.881888 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:20:48.881953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:20:48.881980 kernel: BTRFS info (device vda6): using free space tree May 13 00:20:48.895946 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:20:48.906454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:20:48.977482 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:20:48.995050 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory May 13 00:20:49.026238 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:20:49.043220 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:20:49.230583 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 00:20:49.246058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 00:20:49.252539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 00:20:49.268922 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:20:49.269461 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 00:20:49.299882 ignition[930]: INFO : Ignition 2.19.0 May 13 00:20:49.299882 ignition[930]: INFO : Stage: mount May 13 00:20:49.299882 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:20:49.299882 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:20:49.304212 ignition[930]: INFO : mount: mount passed May 13 00:20:49.304212 ignition[930]: INFO : Ignition finished successfully May 13 00:20:49.305557 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 00:20:49.318075 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 00:20:49.318695 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 00:20:49.815320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:20:49.837250 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) May 13 00:20:49.840876 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:20:49.840910 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:20:49.842997 kernel: BTRFS info (device vda6): using free space tree May 13 00:20:49.860113 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:20:49.863434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
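The OEM partition (vda6) is BTRFS and is mounted under /sysroot/oem for Ignition's benefit. Inspecting and mounting it by hand looks roughly like this; the mount point is illustrative:

    # Show the BTRFS filesystem on the OEM partition and mount it by
    # label, mirroring sysroot-oem.mount above.
    btrfs filesystem show /dev/vda6
    mkdir -p /mnt/oem
    mount -t btrfs /dev/disk/by-label/OEM /mnt/oem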
May 13 00:20:49.913730 ignition[961]: INFO : Ignition 2.19.0 May 13 00:20:49.918988 ignition[961]: INFO : Stage: files May 13 00:20:49.918988 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:20:49.918988 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:20:49.927994 ignition[961]: DEBUG : files: compiled without relabeling support, skipping May 13 00:20:49.927994 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:20:49.927994 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:20:49.941849 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:20:49.941849 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:20:49.941849 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:20:49.941849 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 00:20:49.941849 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 00:20:49.935270 unknown[961]: wrote ssh authorized keys file for user: core May 13 00:20:50.026882 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:20:50.277177 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 00:20:50.277177 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:20:50.281315 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 00:20:50.791346 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 00:20:51.216165 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:20:51.216165 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 13 00:20:51.222464 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:20:51.334581 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:20:51.340114 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:20:51.389484 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:20:51.389484 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 13 00:20:51.389484 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:20:51.389484 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:20:51.389484 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:20:51.389484 ignition[961]: INFO : files: files passed May 13 00:20:51.389484 ignition[961]: INFO : Ignition finished successfully May 13 00:20:51.343197 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:20:51.414068 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:20:51.428161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:20:51.430448 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 13 00:20:51.430585 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:20:51.443595 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:20:51.456385 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:20:51.462074 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory May 13 00:20:51.459558 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:20:51.464737 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:20:51.464737 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:20:51.467877 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:20:51.487279 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:20:51.487440 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:20:51.490197 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:20:51.491849 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:20:51.493999 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:20:51.494990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:20:51.530041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:20:51.553150 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:20:51.563301 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:20:51.572480 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:20:51.572890 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:20:51.628793 ignition[1017]: INFO : Ignition 2.19.0 May 13 00:20:51.628793 ignition[1017]: INFO : Stage: umount May 13 00:20:51.628793 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:20:51.628793 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:20:51.628793 ignition[1017]: INFO : umount: umount passed May 13 00:20:51.628793 ignition[1017]: INFO : Ignition finished successfully May 13 00:20:51.573278 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:20:51.573397 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:20:51.574345 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 00:20:51.574755 systemd[1]: Stopped target basic.target - Basic System. May 13 00:20:51.575384 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 00:20:51.575783 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:20:51.576391 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:20:51.576791 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:20:51.577209 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:20:51.577626 systemd[1]: Stopped target sysinit.target - System Initialization. 
May 13 00:20:51.578227 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:20:51.578632 systemd[1]: Stopped target swap.target - Swaps. May 13 00:20:51.579211 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:20:51.579360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:20:51.580255 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:20:51.580683 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:20:51.581245 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:20:51.581390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:20:51.581839 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:20:51.581965 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:20:51.582649 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:20:51.582781 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:20:51.583374 systemd[1]: Stopped target paths.target - Path Units. May 13 00:20:51.583668 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:20:51.586896 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:20:51.587372 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:20:51.587761 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:20:51.588188 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:20:51.588301 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:20:51.588815 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:20:51.588939 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:20:51.589446 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:20:51.589578 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:20:51.590259 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:20:51.590382 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:20:51.591641 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:20:51.592825 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:20:51.593220 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:20:51.593366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:20:51.593746 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:20:51.593897 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:20:51.597973 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:20:51.598122 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:20:51.613506 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:20:51.613650 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:20:51.614368 systemd[1]: Stopped target network.target - Network. May 13 00:20:51.614702 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:20:51.614763 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:20:51.615260 systemd[1]: ignition-kargs.service: Deactivated successfully. 
May 13 00:20:51.615316 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:20:51.615628 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:20:51.615680 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:20:51.616161 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:20:51.616215 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:20:51.616629 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:20:51.617385 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:20:51.621583 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:20:51.626622 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:20:51.626801 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:20:51.630238 systemd-networkd[774]: eth0: DHCPv6 lease lost May 13 00:20:51.630641 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:20:51.630815 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:20:51.632828 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:20:51.632971 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:20:51.635663 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:20:51.635746 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:20:51.637513 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:20:51.637567 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:20:51.828911 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). May 13 00:20:51.665003 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:20:51.666958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:20:51.667038 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:20:51.669433 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:20:51.669485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:20:51.671489 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:20:51.671538 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:20:51.672890 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:20:51.672942 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:20:51.675383 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:20:51.697241 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:20:51.697466 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:20:51.701254 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:20:51.701352 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:20:51.702866 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:20:51.702914 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:20:51.704943 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:20:51.704997 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 13 00:20:51.707340 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:20:51.707391 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:20:51.709382 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:20:51.709432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:20:51.728067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:20:51.729822 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:20:51.729917 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:20:51.732168 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:20:51.732231 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:20:51.749371 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:20:51.749462 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:20:51.751939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:20:51.751991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:20:51.753801 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:20:51.753947 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:20:51.756203 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:20:51.756308 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:20:51.759551 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:20:51.761544 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:20:51.773936 systemd[1]: Switching root. May 13 00:20:51.931065 systemd-journald[190]: Journal stopped May 13 00:20:53.702132 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:20:53.702222 kernel: SELinux: policy capability open_perms=1 May 13 00:20:53.702239 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:20:53.702253 kernel: SELinux: policy capability always_check_network=0 May 13 00:20:53.702276 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:20:53.702290 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:20:53.702306 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:20:53.702325 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:20:53.702342 kernel: audit: type=1403 audit(1747095652.795:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:20:53.702371 systemd[1]: Successfully loaded SELinux policy in 45.338ms. May 13 00:20:53.702396 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.128ms. May 13 00:20:53.702420 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:20:53.702437 systemd[1]: Detected virtualization kvm. May 13 00:20:53.702453 systemd[1]: Detected architecture x86-64. May 13 00:20:53.702468 systemd[1]: Detected first boot. May 13 00:20:53.702488 systemd[1]: Initializing machine ID from VM UUID. 
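After switch-root, PID 1 loads the SELinux policy before starting anything in the real root. Once the system is up, the loaded state can be inspected; a sketch, assuming policycoreutils is available:

    # Report the current SELinux mode and policy, corresponding to the
    # "Successfully loaded SELinux policy" message above.
    getenforce
    sestatus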
May 13 00:20:53.702504 zram_generator::config[1061]: No configuration found. May 13 00:20:53.702520 systemd[1]: Populated /etc with preset unit settings. May 13 00:20:53.702535 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:20:53.702552 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:20:53.702568 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:20:53.702585 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:20:53.702601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:20:53.702617 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:20:53.702636 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:20:53.702653 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:20:53.702669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:20:53.702685 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:20:53.702701 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:20:53.702719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:20:53.702735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:20:53.702751 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:20:53.702770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:20:53.702787 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:20:53.702804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:20:53.702820 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 00:20:53.702836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:20:53.702852 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:20:53.702884 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:20:53.702900 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:20:53.702917 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:20:53.702937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:20:53.702959 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:20:53.702975 systemd[1]: Reached target slices.target - Slice Units. May 13 00:20:53.702992 systemd[1]: Reached target swap.target - Swaps. May 13 00:20:53.703008 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:20:53.703033 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:20:53.703050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:20:53.703066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:20:53.703085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:20:53.703104 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
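The zram_generator "No configuration found" message above means no zram swap device gets set up on this boot. Enabling one takes a small config file for the generator; a sketch with illustrative sizing:

    # Hypothetical zram-generator config; with no such file present the
    # generator logs "No configuration found", as seen above.
    cat > /etc/systemd/zram-generator.conf <<'EOF'
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
    EOF
    systemctl daemon-reload
    systemctl start systemd-zram-setup@zram0.service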
May 13 00:20:53.703120 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:20:53.703137 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:20:53.703152 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:20:53.703169 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:53.703185 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:20:53.703201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:20:53.703217 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:20:53.703238 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:20:53.703255 systemd[1]: Reached target machines.target - Containers. May 13 00:20:53.703271 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:20:53.703287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:53.703303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:20:53.703319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:20:53.703335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:53.703351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:20:53.703371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:53.703387 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:20:53.703404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:53.703421 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:20:53.703437 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:20:53.703455 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:20:53.703471 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:20:53.703487 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:20:53.703503 kernel: fuse: init (API version 7.39) May 13 00:20:53.703522 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:20:53.703538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:20:53.703554 kernel: loop: module loaded May 13 00:20:53.703569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:20:53.703585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:20:53.703602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:20:53.703618 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:20:53.703634 systemd[1]: Stopped verity-setup.service. May 13 00:20:53.703650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:53.703692 systemd-journald[1138]: Collecting audit messages is disabled. 
May 13 00:20:53.703728 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:20:53.703744 systemd-journald[1138]: Journal started May 13 00:20:53.703775 systemd-journald[1138]: Runtime Journal (/run/log/journal/ed6f50483729425dbf6544b180df7cdd) is 6.0M, max 48.3M, 42.2M free. May 13 00:20:53.461960 systemd[1]: Queued start job for default target multi-user.target. May 13 00:20:53.483610 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:20:53.484145 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:20:53.706209 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:20:53.707163 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:20:53.708718 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:20:53.710112 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:20:53.711668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:20:53.713167 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:20:53.714737 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:20:53.716586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:20:53.718493 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:20:53.718713 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:20:53.720610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:53.720873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:53.722690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:53.722932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:53.725154 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:20:53.725424 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:20:53.726887 kernel: ACPI: bus type drm_connector registered May 13 00:20:53.727809 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:53.728067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:53.729913 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:20:53.730145 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:20:53.732585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:20:53.734291 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:20:53.736115 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:20:53.751918 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:20:53.758968 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:20:53.761780 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:20:53.763161 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:20:53.763195 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:20:53.765493 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
May 13 00:20:53.768651 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:20:53.771226 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:20:53.772710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:53.777681 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:20:53.786449 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:20:53.788000 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:53.790291 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:20:53.791725 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:20:53.794226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:20:53.799975 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:20:53.805062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:20:53.808787 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:20:53.810378 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:20:53.814641 systemd-journald[1138]: Time spent on flushing to /var/log/journal/ed6f50483729425dbf6544b180df7cdd is 27.243ms for 999 entries. May 13 00:20:53.814641 systemd-journald[1138]: System Journal (/var/log/journal/ed6f50483729425dbf6544b180df7cdd) is 8.0M, max 195.6M, 187.6M free. May 13 00:20:53.872690 systemd-journald[1138]: Received client request to flush runtime journal. May 13 00:20:53.872760 kernel: loop0: detected capacity change from 0 to 140768 May 13 00:20:53.813069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:20:53.816456 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:20:53.825204 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:20:53.835071 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:20:53.844006 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:20:53.851951 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:20:53.854896 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 13 00:20:53.854910 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 13 00:20:53.874216 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:20:53.876148 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:20:53.880145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:20:53.884334 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:20:53.895210 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:20:53.897626 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 13 00:20:53.898624 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:20:53.903332 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:20:53.915913 kernel: loop1: detected capacity change from 0 to 218376 May 13 00:20:53.924102 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:20:53.939053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:20:53.957131 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 13 00:20:53.957521 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 13 00:20:53.957897 kernel: loop2: detected capacity change from 0 to 142488 May 13 00:20:53.964273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:20:53.989885 kernel: loop3: detected capacity change from 0 to 140768 May 13 00:20:54.002889 kernel: loop4: detected capacity change from 0 to 218376 May 13 00:20:54.014915 kernel: loop5: detected capacity change from 0 to 142488 May 13 00:20:54.025795 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:20:54.026399 (sd-merge)[1202]: Merged extensions into '/usr'. May 13 00:20:54.033502 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:20:54.033519 systemd[1]: Reloading... May 13 00:20:54.094887 zram_generator::config[1224]: No configuration found. May 13 00:20:54.134550 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:20:54.226110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:20:54.275016 systemd[1]: Reloading finished in 240 ms. May 13 00:20:54.317848 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:20:54.319453 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:20:54.332020 systemd[1]: Starting ensure-sysext.service... May 13 00:20:54.334224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:20:54.340187 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... May 13 00:20:54.340197 systemd[1]: Reloading... May 13 00:20:54.358312 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:20:54.358683 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:20:54.359686 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:20:54.360043 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. May 13 00:20:54.360125 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. May 13 00:20:54.366750 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:20:54.366766 systemd-tmpfiles[1266]: Skipping /boot May 13 00:20:54.390836 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. 
May 13 00:20:54.390869 systemd-tmpfiles[1266]: Skipping /boot May 13 00:20:54.409917 zram_generator::config[1295]: No configuration found. May 13 00:20:54.512557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:20:54.562692 systemd[1]: Reloading finished in 222 ms. May 13 00:20:54.581604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:20:54.592376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:20:54.601591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:54.603290 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:20:54.605870 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:20:54.607181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:54.608505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:54.612846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:54.616311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:54.618038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:54.620040 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:20:54.624654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:20:54.631239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:20:54.633902 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:20:54.635146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:54.637259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:54.637445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:54.639434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:54.639619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:54.641515 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:54.641688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:54.645703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:54.646677 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:20:54.658067 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:20:54.663752 systemd-udevd[1345]: Using default interface naming scheme 'v255'. May 13 00:20:54.665174 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
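The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr (the loop0-loop5 capacity changes are those images being attached). The merge can be inspected and re-run at runtime; a sketch:

    # List merged system extensions and re-merge after adding or
    # removing images under /etc/extensions or /var/lib/extensions.
    systemd-sysext status
    systemd-sysext refresh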
May 13 00:20:54.669229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:54.669598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:54.671974 augenrules[1359]: No rules May 13 00:20:54.679274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:54.686773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:54.689660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:54.690945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:54.696831 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:20:54.698099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:54.699633 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:20:54.702102 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:20:54.704154 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:20:54.706552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:54.707037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:54.709682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:54.709898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:54.711869 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:54.712055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:54.714160 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:20:54.718661 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:20:54.732724 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:20:54.752076 systemd[1]: Finished ensure-sysext.service. May 13 00:20:54.756659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:54.756811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:54.766055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:54.770633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:20:54.774057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:54.777804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1394) May 13 00:20:54.778458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:54.779617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:54.782986 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:20:54.786710 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 13 00:20:54.787982 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:20:54.788025 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:54.788557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:54.790913 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:54.792729 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:20:54.793095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:20:54.803277 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:54.803470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:54.804953 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 00:20:54.809248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:54.809453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:54.820690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:54.821024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:20:54.833136 systemd-resolved[1344]: Positive Trust Anchors: May 13 00:20:54.833162 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:20:54.833195 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:20:54.840040 systemd-resolved[1344]: Defaulting to hostname 'linux'. May 13 00:20:54.841824 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:20:54.843269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:20:54.846033 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:20:54.850885 kernel: ACPI: button: Power Button [PWRF] May 13 00:20:54.862262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:20:54.874063 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:20:54.876817 systemd-networkd[1406]: lo: Link UP May 13 00:20:54.876832 systemd-networkd[1406]: lo: Gained carrier May 13 00:20:54.879578 systemd-networkd[1406]: Enumeration completed May 13 00:20:54.879659 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:20:54.880507 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 13 00:20:54.880512 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:20:54.881064 systemd[1]: Reached target network.target - Network. May 13 00:20:54.881646 systemd-networkd[1406]: eth0: Link UP May 13 00:20:54.881660 systemd-networkd[1406]: eth0: Gained carrier May 13 00:20:54.881672 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:20:54.888885 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 13 00:20:54.889216 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:20:54.891677 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:20:54.891928 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:20:54.893142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:20:54.894646 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:20:54.896055 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:20:54.899930 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:20:54.899939 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:20:54.900778 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. May 13 00:20:54.903017 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:20:54.903113 systemd-timesyncd[1407]: Initial clock synchronization to Tue 2025-05-13 00:20:55.246137 UTC. May 13 00:20:54.912790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:20:54.994884 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:20:54.995154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:20:55.008630 kernel: kvm_amd: TSC scaling supported May 13 00:20:55.008676 kernel: kvm_amd: Nested Virtualization enabled May 13 00:20:55.008694 kernel: kvm_amd: Nested Paging enabled May 13 00:20:55.008716 kernel: kvm_amd: LBR virtualization supported May 13 00:20:55.009834 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 13 00:20:55.009856 kernel: kvm_amd: Virtual GIF supported May 13 00:20:55.010263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:20:55.010746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:20:55.030214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:20:55.030914 kernel: EDAC MC: Ver: 3.0.0 May 13 00:20:55.066431 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:20:55.086097 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:20:55.087995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:20:55.095771 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:20:55.132181 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:20:55.133931 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:20:55.135141 systemd[1]: Reached target sysinit.target - System Initialization. 
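systemd-networkd warns above that eth0 matched zz-default.network "based on potentially unpredictable interface name". A .network unit that matches the NIC by MAC address sidesteps the warning; the address here is illustrative:

    # Match the interface by MAC instead of by name; the 10- prefix
    # sorts before zz-default.network, so this unit takes precedence.
    cat > /etc/systemd/network/10-eth.network <<'EOF'
    [Match]
    MACAddress=52:54:00:12:34:56

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload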
May 13 00:20:55.136417 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:20:55.137774 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:20:55.139354 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:20:55.140646 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:20:55.141993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:20:55.143319 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:20:55.143358 systemd[1]: Reached target paths.target - Path Units. May 13 00:20:55.144330 systemd[1]: Reached target timers.target - Timer Units. May 13 00:20:55.146015 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:20:55.149112 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:20:55.157787 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:20:55.160507 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:20:55.162225 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:20:55.163455 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:20:55.164468 systemd[1]: Reached target basic.target - Basic System. May 13 00:20:55.165495 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:20:55.165532 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:20:55.166688 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:20:55.169015 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:20:55.171987 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:20:55.174029 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:20:55.178290 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:20:55.179420 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:20:55.182564 jq[1446]: false May 13 00:20:55.183065 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:20:55.187621 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:20:55.200100 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:20:55.202450 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
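sshd-keygen.service, just started above, creates any missing host keys before the SSH socket accepts connections. The equivalent manual step is a one-liner:

    # Generate all missing default host key types under /etc/ssh,
    # as sshd-keygen.service does on first boot.
    ssh-keygen -A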
May 13 00:20:55.206523 extend-filesystems[1447]: Found loop3 May 13 00:20:55.208158 extend-filesystems[1447]: Found loop4 May 13 00:20:55.208158 extend-filesystems[1447]: Found loop5 May 13 00:20:55.208158 extend-filesystems[1447]: Found sr0 May 13 00:20:55.208158 extend-filesystems[1447]: Found vda May 13 00:20:55.208158 extend-filesystems[1447]: Found vda1 May 13 00:20:55.208158 extend-filesystems[1447]: Found vda2 May 13 00:20:55.208158 extend-filesystems[1447]: Found vda3 May 13 00:20:55.208158 extend-filesystems[1447]: Found usr May 13 00:20:55.208158 extend-filesystems[1447]: Found vda4 May 13 00:20:55.208158 extend-filesystems[1447]: Found vda6 May 13 00:20:55.208158 extend-filesystems[1447]: Found vda7 May 13 00:20:55.208158 extend-filesystems[1447]: Found vda9 May 13 00:20:55.208158 extend-filesystems[1447]: Checking size of /dev/vda9 May 13 00:20:55.215546 dbus-daemon[1445]: [system] SELinux support is enabled May 13 00:20:55.209685 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:20:55.212126 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:20:55.212545 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:20:55.214078 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:20:55.217000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:20:55.219143 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:20:55.225855 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:20:55.229778 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:20:55.230079 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:20:55.230437 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:20:55.230772 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:20:55.234675 jq[1461]: true May 13 00:20:55.236527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:20:55.236759 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:20:55.240243 extend-filesystems[1447]: Resized partition /dev/vda9 May 13 00:20:55.255768 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1395) May 13 00:20:55.257597 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:20:55.258972 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) May 13 00:20:55.269398 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:20:55.269441 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:20:55.272932 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:20:55.274091 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
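A quick check of the resize figures above: at the 4k block size EXT4-fs reports for /dev/vda9, growing from 553472 to 1864699 blocks takes the root filesystem from roughly 2.1 GiB to roughly 7.1 GiB:

```go
// Worked check of the resize figures: /dev/vda9 grows from 553472 to
// 1864699 blocks at the 4 KiB block size reported by EXT4-fs above.
package main

import "fmt"

func main() {
	const blockSize = 4096
	for _, blocks := range []int64{553472, 1864699} {
		bytes := blocks * blockSize
		fmt.Printf("%7d blocks = %11d bytes = %.2f GiB\n",
			blocks, bytes, float64(bytes)/(1<<30))
	}
	// 553472 blocks ≈ 2.11 GiB; 1864699 blocks ≈ 7.11 GiB.
}
```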
May 13 00:20:55.274121 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:20:55.279691 tar[1468]: linux-amd64/LICENSE May 13 00:20:55.279691 tar[1468]: linux-amd64/helm May 13 00:20:55.283096 update_engine[1460]: I20250513 00:20:55.280344 1460 main.cc:92] Flatcar Update Engine starting May 13 00:20:55.283001 systemd[1]: Started update-engine.service - Update Engine. May 13 00:20:55.285105 update_engine[1460]: I20250513 00:20:55.284548 1460 update_check_scheduler.cc:74] Next update check in 6m34s May 13 00:20:55.286226 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:20:55.301225 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:20:55.301257 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:20:55.306065 jq[1470]: true May 13 00:20:55.311057 systemd-logind[1458]: New seat seat0. May 13 00:20:55.317504 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:20:55.432291 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:20:55.439196 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:20:55.949755 containerd[1471]: time="2025-05-13T00:20:55.949626791Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:20:55.950089 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:20:55.950089 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:20:55.950089 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:20:55.953384 extend-filesystems[1447]: Resized filesystem in /dev/vda9 May 13 00:20:55.952891 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:20:55.956150 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:20:55.953199 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:20:55.980794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:20:55.981959 containerd[1471]: time="2025-05-13T00:20:55.981666679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:55.983543 containerd[1471]: time="2025-05-13T00:20:55.983499525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:55.983543 containerd[1471]: time="2025-05-13T00:20:55.983529659Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:20:55.983543 containerd[1471]: time="2025-05-13T00:20:55.983545081Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:20:55.983777 containerd[1471]: time="2025-05-13T00:20:55.983747492Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:20:55.983777 containerd[1471]: time="2025-05-13T00:20:55.983768838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 13 00:20:55.983868 containerd[1471]: time="2025-05-13T00:20:55.983843264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:55.983868 containerd[1471]: time="2025-05-13T00:20:55.983861549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:55.984134 containerd[1471]: time="2025-05-13T00:20:55.984098190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:55.984134 containerd[1471]: time="2025-05-13T00:20:55.984122534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:20:55.984184 containerd[1471]: time="2025-05-13T00:20:55.984136703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:55.984184 containerd[1471]: time="2025-05-13T00:20:55.984147789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:20:55.984270 containerd[1471]: time="2025-05-13T00:20:55.984248587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:55.984526 containerd[1471]: time="2025-05-13T00:20:55.984496000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:55.984647 containerd[1471]: time="2025-05-13T00:20:55.984617099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:55.984647 containerd[1471]: time="2025-05-13T00:20:55.984635520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:20:55.984760 containerd[1471]: time="2025-05-13T00:20:55.984732452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:20:55.984813 containerd[1471]: time="2025-05-13T00:20:55.984792741Z" level=info msg="metadata content store policy set" policy=shared May 13 00:20:56.003165 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:20:56.015709 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:20:56.016048 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:20:56.023169 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:20:56.048271 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:20:56.059254 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:20:56.062132 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 00:20:56.064085 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:20:56.233988 tar[1468]: linux-amd64/README.md May 13 00:20:56.253774 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
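The sshd-keygen run a few entries above generates fresh RSA, ECDSA, and ED25519 host keys on first boot. A minimal sketch of the ED25519 case, assuming golang.org/x/crypto/ssh for the wire encoding:

```go
// Minimal sketch: generate an ED25519 host key of the kind sshd-keygen
// creates at first boot (it also makes RSA and ECDSA keys; the encoding
// step is the same).
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}
	// Same one-line format as /etc/ssh/ssh_host_ed25519_key.pub.
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(sshPub))
}
```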
May 13 00:20:56.367542 containerd[1471]: time="2025-05-13T00:20:56.367444360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:20:56.367542 containerd[1471]: time="2025-05-13T00:20:56.367553205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:20:56.367697 containerd[1471]: time="2025-05-13T00:20:56.367573672Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:20:56.367697 containerd[1471]: time="2025-05-13T00:20:56.367590054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:20:56.367697 containerd[1471]: time="2025-05-13T00:20:56.367605834Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:20:56.367880 containerd[1471]: time="2025-05-13T00:20:56.367843462Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:20:56.368178 containerd[1471]: time="2025-05-13T00:20:56.368139216Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:20:56.368283 containerd[1471]: time="2025-05-13T00:20:56.368261003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:20:56.368307 containerd[1471]: time="2025-05-13T00:20:56.368280919Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:20:56.368307 containerd[1471]: time="2025-05-13T00:20:56.368296782Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:20:56.368344 containerd[1471]: time="2025-05-13T00:20:56.368311076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368344 containerd[1471]: time="2025-05-13T00:20:56.368324454Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368344 containerd[1471]: time="2025-05-13T00:20:56.368336667Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368412 containerd[1471]: time="2025-05-13T00:20:56.368350774Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368412 containerd[1471]: time="2025-05-13T00:20:56.368365316Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368412 containerd[1471]: time="2025-05-13T00:20:56.368378549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368412 containerd[1471]: time="2025-05-13T00:20:56.368399952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368412 containerd[1471]: time="2025-05-13T00:20:56.368412966Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:20:56.368499 containerd[1471]: time="2025-05-13T00:20:56.368432820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 13 00:20:56.368499 containerd[1471]: time="2025-05-13T00:20:56.368447561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368499 containerd[1471]: time="2025-05-13T00:20:56.368462519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368499 containerd[1471]: time="2025-05-13T00:20:56.368475752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368499 containerd[1471]: time="2025-05-13T00:20:56.368489057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368501718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368514005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368526291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368538661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368551728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368563526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368584482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368597414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368612517Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368632964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368644523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368681 containerd[1471]: time="2025-05-13T00:20:56.368654336Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368709117Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368726539Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368737006Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368748680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368757921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368769906Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368785228Z" level=info msg="NRI interface is disabled by configuration." May 13 00:20:56.368919 containerd[1471]: time="2025-05-13T00:20:56.368795051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:20:56.369128 containerd[1471]: time="2025-05-13T00:20:56.369070994Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:20:56.369128 
containerd[1471]: time="2025-05-13T00:20:56.369127345Z" level=info msg="Connect containerd service" May 13 00:20:56.369297 containerd[1471]: time="2025-05-13T00:20:56.369163914Z" level=info msg="using legacy CRI server" May 13 00:20:56.369297 containerd[1471]: time="2025-05-13T00:20:56.369171274Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:20:56.369297 containerd[1471]: time="2025-05-13T00:20:56.369280357Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:20:56.369959 containerd[1471]: time="2025-05-13T00:20:56.369932711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:20:56.370187 containerd[1471]: time="2025-05-13T00:20:56.370105722Z" level=info msg="Start subscribing containerd event" May 13 00:20:56.370239 containerd[1471]: time="2025-05-13T00:20:56.370218975Z" level=info msg="Start recovering state" May 13 00:20:56.370284 containerd[1471]: time="2025-05-13T00:20:56.370260939Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:20:56.370951 containerd[1471]: time="2025-05-13T00:20:56.370441322Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:20:56.370951 containerd[1471]: time="2025-05-13T00:20:56.370557963Z" level=info msg="Start event monitor" May 13 00:20:56.370951 containerd[1471]: time="2025-05-13T00:20:56.370618981Z" level=info msg="Start snapshots syncer" May 13 00:20:56.370951 containerd[1471]: time="2025-05-13T00:20:56.370640581Z" level=info msg="Start cni network conf syncer for default" May 13 00:20:56.370951 containerd[1471]: time="2025-05-13T00:20:56.370667796Z" level=info msg="Start streaming server" May 13 00:20:56.371145 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:20:56.372051 containerd[1471]: time="2025-05-13T00:20:56.372011896Z" level=info msg="containerd successfully booted in 0.496904s" May 13 00:20:56.391337 bash[1499]: Updated "/home/core/.ssh/authorized_keys" May 13 00:20:56.393590 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:20:56.395620 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:20:56.718772 systemd-networkd[1406]: eth0: Gained IPv6LL May 13 00:20:56.722258 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:20:56.724158 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:20:56.735152 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:20:56.737916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:20:56.740266 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:20:56.764144 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:20:56.771413 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:20:56.771683 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:20:56.773415 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:20:57.488556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
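containerd is now serving on /run/containerd/containerd.sock, as logged above. A minimal client sketch, assuming the github.com/containerd/containerd Go client library:

```go
// Minimal sketch: connect to the socket containerd reports serving on
// above and ask for its version, roughly what `ctr version` does.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "default")
	v, err := client.Version(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("containerd", v.Version, v.Revision) // v1.7.21 per the log
}
```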
May 13 00:20:57.497136 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:20:57.498557 systemd[1]: Startup finished in 963ms (kernel) + 8.086s (initrd) + 4.745s (userspace) = 13.795s. May 13 00:20:57.503304 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:20:57.940591 kubelet[1559]: E0513 00:20:57.940513 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:57.944868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:57.945099 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:57.945488 systemd[1]: kubelet.service: Consumed 1.028s CPU time. May 13 00:20:58.233710 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:20:58.235157 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:39162.service - OpenSSH per-connection server daemon (10.0.0.1:39162). May 13 00:20:58.279839 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 39162 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:58.282016 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:58.290401 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:20:58.301094 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:20:58.303034 systemd-logind[1458]: New session 1 of user core. May 13 00:20:58.313755 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:20:58.325158 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:20:58.328113 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:58.448119 systemd[1576]: Queued start job for default target default.target. May 13 00:20:58.459197 systemd[1576]: Created slice app.slice - User Application Slice. May 13 00:20:58.459222 systemd[1576]: Reached target paths.target - Paths. May 13 00:20:58.459235 systemd[1576]: Reached target timers.target - Timers. May 13 00:20:58.461012 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:20:58.476012 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:20:58.476141 systemd[1576]: Reached target sockets.target - Sockets. May 13 00:20:58.476158 systemd[1576]: Reached target basic.target - Basic System. May 13 00:20:58.476201 systemd[1576]: Reached target default.target - Main User Target. May 13 00:20:58.476234 systemd[1576]: Startup finished in 141ms. May 13 00:20:58.476818 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:20:58.478702 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:20:58.546594 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:39168.service - OpenSSH per-connection server daemon (10.0.0.1:39168). 
May 13 00:20:58.590846 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 39168 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:58.592670 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:58.597088 systemd-logind[1458]: New session 2 of user core. May 13 00:20:58.608113 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:20:58.666181 sshd[1587]: pam_unix(sshd:session): session closed for user core May 13 00:20:58.673995 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:39168.service: Deactivated successfully. May 13 00:20:58.675846 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:20:58.677693 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. May 13 00:20:58.678973 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:39172.service - OpenSSH per-connection server daemon (10.0.0.1:39172). May 13 00:20:58.680009 systemd-logind[1458]: Removed session 2. May 13 00:20:58.730774 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 39172 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:58.732752 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:58.737757 systemd-logind[1458]: New session 3 of user core. May 13 00:20:58.753200 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:20:58.806261 sshd[1594]: pam_unix(sshd:session): session closed for user core May 13 00:20:58.821510 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:39172.service: Deactivated successfully. May 13 00:20:58.824105 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:20:58.826104 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. May 13 00:20:58.835146 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:39186.service - OpenSSH per-connection server daemon (10.0.0.1:39186). May 13 00:20:58.836261 systemd-logind[1458]: Removed session 3. May 13 00:20:58.871526 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 39186 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:58.873252 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:58.877648 systemd-logind[1458]: New session 4 of user core. May 13 00:20:58.893191 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:20:58.950728 sshd[1601]: pam_unix(sshd:session): session closed for user core May 13 00:20:58.967695 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:39186.service: Deactivated successfully. May 13 00:20:58.969542 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:20:58.971209 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. May 13 00:20:58.983125 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:39202.service - OpenSSH per-connection server daemon (10.0.0.1:39202). May 13 00:20:58.983943 systemd-logind[1458]: Removed session 4. May 13 00:20:59.018248 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 39202 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:59.019969 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:59.023939 systemd-logind[1458]: New session 5 of user core. May 13 00:20:59.034006 systemd[1]: Started session-5.scope - Session 5 of User core. 
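Each "Accepted publickey for core" entry above is one client completing publickey authentication against sshd. A minimal sketch of that client side, assuming golang.org/x/crypto/ssh; the key path is hypothetical, and the host key check is disabled only because this is a throwaway lab VM:

```go
// Minimal sketch of the client side of the publickey logins recorded above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	config := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // lab VM only; verify in production
	}
	client, err := ssh.Dial("tcp", "10.0.0.35:22", config) // address from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
```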
May 13 00:20:59.094058 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:20:59.094417 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:59.109242 sudo[1611]: pam_unix(sudo:session): session closed for user root May 13 00:20:59.111381 sshd[1608]: pam_unix(sshd:session): session closed for user core May 13 00:20:59.127059 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:39202.service: Deactivated successfully. May 13 00:20:59.129145 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:20:59.130722 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. May 13 00:20:59.132337 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:39216.service - OpenSSH per-connection server daemon (10.0.0.1:39216). May 13 00:20:59.133267 systemd-logind[1458]: Removed session 5. May 13 00:20:59.173001 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 39216 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:59.174760 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:59.179349 systemd-logind[1458]: New session 6 of user core. May 13 00:20:59.184341 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:20:59.242198 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:20:59.242532 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:59.246769 sudo[1620]: pam_unix(sudo:session): session closed for user root May 13 00:20:59.253397 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:20:59.253836 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:59.274199 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:20:59.276058 auditctl[1623]: No rules May 13 00:20:59.277706 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:20:59.278030 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:20:59.280319 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:20:59.313787 augenrules[1641]: No rules May 13 00:20:59.315672 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:20:59.317075 sudo[1619]: pam_unix(sudo:session): session closed for user root May 13 00:20:59.319322 sshd[1616]: pam_unix(sshd:session): session closed for user core May 13 00:20:59.331542 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:39216.service: Deactivated successfully. May 13 00:20:59.334053 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:20:59.336087 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. May 13 00:20:59.345485 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:39232.service - OpenSSH per-connection server daemon (10.0.0.1:39232). May 13 00:20:59.346730 systemd-logind[1458]: Removed session 6. May 13 00:20:59.381494 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 39232 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:20:59.383212 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:59.387610 systemd-logind[1458]: New session 7 of user core. May 13 00:20:59.401055 systemd[1]: Started session-7.scope - Session 7 of User core. 
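The sudo entries above restart audit-rules via systemctl; the same operation over the system bus looks roughly as follows, in a sketch assuming github.com/coreos/go-systemd/v22:

```go
// Minimal sketch: restart a unit over D-Bus, the programmatic equivalent of
// the `systemctl restart audit-rules` run via sudo above.
package main

import (
	"context"
	"fmt"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewSystemConnectionContext(ctx) // needs root or polkit grant
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "audit-rules.service", "replace", done); err != nil {
		panic(err)
	}
	fmt.Println("restart job result:", <-done) // "done" on success
}
```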
May 13 00:20:59.455732 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:20:59.456100 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:59.739103 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 00:20:59.739225 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:21:00.014183 dockerd[1671]: time="2025-05-13T00:21:00.014032786Z" level=info msg="Starting up" May 13 00:21:01.719678 systemd[1]: var-lib-docker-metacopy\x2dcheck2293252017-merged.mount: Deactivated successfully. May 13 00:21:01.747612 dockerd[1671]: time="2025-05-13T00:21:01.747554808Z" level=info msg="Loading containers: start." May 13 00:21:02.020902 kernel: Initializing XFRM netlink socket May 13 00:21:02.115462 systemd-networkd[1406]: docker0: Link UP May 13 00:21:02.292513 dockerd[1671]: time="2025-05-13T00:21:02.292454037Z" level=info msg="Loading containers: done." May 13 00:21:02.307430 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1456254051-merged.mount: Deactivated successfully. May 13 00:21:02.375583 dockerd[1671]: time="2025-05-13T00:21:02.375353199Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:21:02.375583 dockerd[1671]: time="2025-05-13T00:21:02.375520214Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:21:02.375747 dockerd[1671]: time="2025-05-13T00:21:02.375654854Z" level=info msg="Daemon has completed initialization" May 13 00:21:02.431597 dockerd[1671]: time="2025-05-13T00:21:02.431496758Z" level=info msg="API listen on /run/docker.sock" May 13 00:21:02.431928 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:21:03.164258 containerd[1471]: time="2025-05-13T00:21:03.164194881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 00:21:03.933943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703553881.mount: Deactivated successfully. 
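With the daemon initialized and its API listening on /run/docker.sock, the engine can be queried over the unix socket. A standard-library sketch, equivalent to `curl --unix-socket /run/docker.sock http://localhost/version`:

```go
// Minimal sketch: query the Docker Engine API on the unix socket the
// daemon reports listening on above. Standard library only.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the URL's host and dial the unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://localhost/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON with Version, ApiVersion, etc.
}
```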
May 13 00:21:04.851023 containerd[1471]: time="2025-05-13T00:21:04.850954812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:04.851937 containerd[1471]: time="2025-05-13T00:21:04.851890034Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 13 00:21:04.853073 containerd[1471]: time="2025-05-13T00:21:04.853039622Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:04.855841 containerd[1471]: time="2025-05-13T00:21:04.855794192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:04.856949 containerd[1471]: time="2025-05-13T00:21:04.856905644Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.692660733s" May 13 00:21:04.856996 containerd[1471]: time="2025-05-13T00:21:04.856951219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 00:21:04.857511 containerd[1471]: time="2025-05-13T00:21:04.857463106Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 00:21:06.298823 containerd[1471]: time="2025-05-13T00:21:06.298735748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:06.309244 containerd[1471]: time="2025-05-13T00:21:06.309157319Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 13 00:21:06.311218 containerd[1471]: time="2025-05-13T00:21:06.311165477Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:06.317066 containerd[1471]: time="2025-05-13T00:21:06.317016028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:06.318265 containerd[1471]: time="2025-05-13T00:21:06.318196332Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.460699818s" May 13 00:21:06.318265 containerd[1471]: time="2025-05-13T00:21:06.318248155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 00:21:06.318922 
containerd[1471]: time="2025-05-13T00:21:06.318730671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 00:21:08.183814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:21:08.194013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:08.356727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:08.361509 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:21:08.404225 kubelet[1890]: E0513 00:21:08.404141 1890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:21:08.410817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:21:08.411059 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:21:09.156483 containerd[1471]: time="2025-05-13T00:21:09.156403034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:09.171637 containerd[1471]: time="2025-05-13T00:21:09.171535318Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 13 00:21:09.196020 containerd[1471]: time="2025-05-13T00:21:09.195945831Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:09.230939 containerd[1471]: time="2025-05-13T00:21:09.230901474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:09.232158 containerd[1471]: time="2025-05-13T00:21:09.232098744Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.913335025s" May 13 00:21:09.232158 containerd[1471]: time="2025-05-13T00:21:09.232132509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 00:21:09.232715 containerd[1471]: time="2025-05-13T00:21:09.232670578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 00:21:11.810241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430190285.mount: Deactivated successfully. 
May 13 00:21:13.133357 containerd[1471]: time="2025-05-13T00:21:13.133269855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:13.135084 containerd[1471]: time="2025-05-13T00:21:13.135018678Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 13 00:21:13.137376 containerd[1471]: time="2025-05-13T00:21:13.137339671Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:13.140969 containerd[1471]: time="2025-05-13T00:21:13.140917357Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 3.908206597s" May 13 00:21:13.140969 containerd[1471]: time="2025-05-13T00:21:13.140956712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 00:21:13.141419 containerd[1471]: time="2025-05-13T00:21:13.141383046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:13.142016 containerd[1471]: time="2025-05-13T00:21:13.141972284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 00:21:13.654840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182191064.mount: Deactivated successfully. 
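The pulls traced above go through containerd's pull path. A minimal client-side sketch, again assuming the github.com/containerd/containerd Go client, using the coredns reference from the log; the k8s.io namespace is the one the kubelet's CRI layer uses (an assumption, not shown in these entries):

```go
// Minimal sketch: pull and unpack an image through the containerd client,
// the client-side equivalent of the PullImage entries above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.3",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("pulled %s, %d bytes\n", img.Name(), size)
}
```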
May 13 00:21:14.565682 containerd[1471]: time="2025-05-13T00:21:14.565607413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:14.580194 containerd[1471]: time="2025-05-13T00:21:14.580118948Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 13 00:21:14.608744 containerd[1471]: time="2025-05-13T00:21:14.608704930Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:14.642405 containerd[1471]: time="2025-05-13T00:21:14.642351468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:14.643488 containerd[1471]: time="2025-05-13T00:21:14.643451435Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.501438512s" May 13 00:21:14.643488 containerd[1471]: time="2025-05-13T00:21:14.643483785Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 00:21:14.643970 containerd[1471]: time="2025-05-13T00:21:14.643946344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:21:15.276583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817569973.mount: Deactivated successfully. 
May 13 00:21:15.282931 containerd[1471]: time="2025-05-13T00:21:15.282878319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:15.283625 containerd[1471]: time="2025-05-13T00:21:15.283564624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 00:21:15.284737 containerd[1471]: time="2025-05-13T00:21:15.284702470Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:15.287031 containerd[1471]: time="2025-05-13T00:21:15.286987438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:15.287872 containerd[1471]: time="2025-05-13T00:21:15.287833496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 643.856052ms" May 13 00:21:15.287955 containerd[1471]: time="2025-05-13T00:21:15.287876403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 00:21:15.288499 containerd[1471]: time="2025-05-13T00:21:15.288388918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 00:21:16.701498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292207752.mount: Deactivated successfully. May 13 00:21:18.431020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:21:18.445053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:18.636793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:18.642622 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:21:18.680600 kubelet[1990]: E0513 00:21:18.680512 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:21:18.685059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:21:18.685321 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
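All three kubelet failures so far share one root cause: /var/lib/kubelet/config.yaml does not exist yet. The roughly ten-second restart cadence (00:20:57 → 00:21:08 → 00:21:18) is consistent with the Restart=always/RestartSec=10 drop-in kubeadm ships, though the unit file itself is not shown in this log. On a kubeadm-provisioned node the config file is written during `kubeadm init`/`kubeadm join`; the sketch below only illustrates the shape of a minimal hand-placed KubeletConfiguration, and its field values are assumptions, not taken from this log:

```go
// Illustrative sketch only: put a minimal KubeletConfiguration at the path
// the failing kubelet is looking for. kubeadm normally generates this file;
// the contents here are an assumed minimal example.
package main

import "os"

const minimalConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		panic(err)
	}
	// Once this file exists, kubelet.service stops failing at startup.
	if err := os.WriteFile("/var/lib/kubelet/config.yaml",
		[]byte(minimalConfig), 0o644); err != nil {
		panic(err)
	}
}
```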
May 13 00:21:20.777310 containerd[1471]: time="2025-05-13T00:21:20.777227059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:20.820760 containerd[1471]: time="2025-05-13T00:21:20.820669371Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 13 00:21:20.843196 containerd[1471]: time="2025-05-13T00:21:20.843131465Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:20.874672 containerd[1471]: time="2025-05-13T00:21:20.874579865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:20.876396 containerd[1471]: time="2025-05-13T00:21:20.876298146Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.587866446s" May 13 00:21:20.876396 containerd[1471]: time="2025-05-13T00:21:20.876380064Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 00:21:22.801199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:22.810097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:22.834462 systemd[1]: Reloading requested from client PID 2062 ('systemctl') (unit session-7.scope)... May 13 00:21:22.834480 systemd[1]: Reloading... May 13 00:21:22.967916 zram_generator::config[2107]: No configuration found. May 13 00:21:23.687292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:21:23.766456 systemd[1]: Reloading finished in 931 ms. May 13 00:21:23.825241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:23.832315 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:21:23.832571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:23.845188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:24.002587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:24.009141 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:21:24.051007 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:24.051007 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 13 00:21:24.051007 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:24.051529 kubelet[2151]: I0513 00:21:24.051050 2151 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:21:24.406666 kubelet[2151]: I0513 00:21:24.406528 2151 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:21:24.406666 kubelet[2151]: I0513 00:21:24.406563 2151 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:21:24.406812 kubelet[2151]: I0513 00:21:24.406807 2151 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:21:24.427985 kubelet[2151]: E0513 00:21:24.427945 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:24.429562 kubelet[2151]: I0513 00:21:24.429526 2151 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:21:24.436724 kubelet[2151]: E0513 00:21:24.436684 2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:21:24.436724 kubelet[2151]: I0513 00:21:24.436712 2151 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:21:24.442729 kubelet[2151]: I0513 00:21:24.442693 2151 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:21:24.442993 kubelet[2151]: I0513 00:21:24.442934 2151 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:21:24.443143 kubelet[2151]: I0513 00:21:24.442962 2151 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:21:24.443143 kubelet[2151]: I0513 00:21:24.443143 2151 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:21:24.443272 kubelet[2151]: I0513 00:21:24.443152 2151 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:21:24.443313 kubelet[2151]: I0513 00:21:24.443299 2151 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:24.445862 kubelet[2151]: I0513 00:21:24.445824 2151 kubelet.go:446] "Attempting to sync node with API server" May 13 00:21:24.445862 kubelet[2151]: I0513 00:21:24.445844 2151 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:21:24.445921 kubelet[2151]: I0513 00:21:24.445875 2151 kubelet.go:352] "Adding apiserver pod source" May 13 00:21:24.445921 kubelet[2151]: I0513 00:21:24.445886 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:21:24.446916 kubelet[2151]: W0513 00:21:24.446849 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:24.446967 kubelet[2151]: E0513 00:21:24.446942 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:24.447082 kubelet[2151]: W0513 00:21:24.446896 2151 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:24.447082 kubelet[2151]: E0513 00:21:24.447055 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:24.448182 kubelet[2151]: I0513 00:21:24.448159 2151 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:21:24.448545 kubelet[2151]: I0513 00:21:24.448522 2151 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:21:24.448593 kubelet[2151]: W0513 00:21:24.448580 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:21:24.450699 kubelet[2151]: I0513 00:21:24.450673 2151 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:21:24.450746 kubelet[2151]: I0513 00:21:24.450709 2151 server.go:1287] "Started kubelet" May 13 00:21:24.452018 kubelet[2151]: I0513 00:21:24.451956 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:21:24.455670 kubelet[2151]: E0513 00:21:24.455635 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:21:24.456822 kubelet[2151]: I0513 00:21:24.455740 2151 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:21:24.456822 kubelet[2151]: I0513 00:21:24.456112 2151 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:21:24.456822 kubelet[2151]: I0513 00:21:24.456275 2151 reconciler.go:26] "Reconciler: start to sync state" May 13 00:21:24.456822 kubelet[2151]: W0513 00:21:24.456657 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:24.456822 kubelet[2151]: E0513 00:21:24.456692 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:24.457003 kubelet[2151]: I0513 00:21:24.456927 2151 factory.go:221] Registration of the systemd container factory successfully May 13 00:21:24.457088 kubelet[2151]: I0513 00:21:24.457061 2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:21:24.458073 kubelet[2151]: E0513 00:21:24.458044 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" May 13 00:21:24.458211 kubelet[2151]: I0513 00:21:24.458189 2151 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:21:24.458338 kubelet[2151]: I0513 00:21:24.458307 2151 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:21:24.458560 kubelet[2151]: I0513 00:21:24.458514 2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:21:24.458882 kubelet[2151]: I0513 00:21:24.458852 2151 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:21:24.461219 kubelet[2151]: I0513 00:21:24.461193 2151 server.go:490] "Adding debug handlers to kubelet server" May 13 00:21:24.461412 kubelet[2151]: E0513 00:21:24.459369 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee44bc4d1b83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:21:24.450687875 +0000 UTC m=+0.437107225,LastTimestamp:2025-05-13 00:21:24.450687875 +0000 UTC m=+0.437107225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:21:24.462097 kubelet[2151]: E0513 00:21:24.462072 2151 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:21:24.462196 kubelet[2151]: I0513 00:21:24.462160 2151 factory.go:221] Registration of the containerd container factory successfully May 13 00:21:24.476090 kubelet[2151]: I0513 00:21:24.476047 2151 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:21:24.476090 kubelet[2151]: I0513 00:21:24.476068 2151 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:21:24.476090 kubelet[2151]: I0513 00:21:24.476085 2151 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:24.479254 kubelet[2151]: I0513 00:21:24.479195 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:21:24.480900 kubelet[2151]: I0513 00:21:24.480844 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:21:24.480983 kubelet[2151]: I0513 00:21:24.480934 2151 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:21:24.480983 kubelet[2151]: I0513 00:21:24.480967 2151 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 00:21:24.480983 kubelet[2151]: I0513 00:21:24.480976 2151 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:21:24.481091 kubelet[2151]: E0513 00:21:24.481024 2151 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:21:24.481400 kubelet[2151]: W0513 00:21:24.481370 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:24.481451 kubelet[2151]: E0513 00:21:24.481404 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:24.556000 kubelet[2151]: E0513 00:21:24.555937 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:21:24.581502 kubelet[2151]: E0513 00:21:24.581445 2151 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:21:24.657015 kubelet[2151]: E0513 00:21:24.656833 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:21:24.659530 kubelet[2151]: E0513 00:21:24.659477 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" May 13 00:21:24.739053 kubelet[2151]: I0513 00:21:24.738991 2151 policy_none.go:49] "None policy: Start" May 13 00:21:24.739053 kubelet[2151]: I0513 00:21:24.739040 2151 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:21:24.739053 kubelet[2151]: I0513 00:21:24.739055 2151 state_mem.go:35] "Initializing new in-memory state store" May 13 00:21:24.747226 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:21:24.757978 kubelet[2151]: E0513 00:21:24.757929 2151 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:21:24.760431 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:21:24.763363 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:21:24.772723 kubelet[2151]: I0513 00:21:24.772682 2151 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:21:24.772925 kubelet[2151]: I0513 00:21:24.772903 2151 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:21:24.772962 kubelet[2151]: I0513 00:21:24.772916 2151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:21:24.773632 kubelet[2151]: I0513 00:21:24.773243 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:21:24.774007 kubelet[2151]: E0513 00:21:24.773920 2151 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" May 13 00:21:24.774007 kubelet[2151]: E0513 00:21:24.773955 2151 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:21:24.789609 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 00:21:24.808275 kubelet[2151]: E0513 00:21:24.808220 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:24.811415 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 13 00:21:24.813077 kubelet[2151]: E0513 00:21:24.813045 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:24.814760 systemd[1]: Created slice kubepods-burstable-pod694d815de2e87bd86f93ae61010e79fd.slice - libcontainer container kubepods-burstable-pod694d815de2e87bd86f93ae61010e79fd.slice. May 13 00:21:24.816301 kubelet[2151]: E0513 00:21:24.816269 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:24.858700 kubelet[2151]: I0513 00:21:24.858651 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/694d815de2e87bd86f93ae61010e79fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"694d815de2e87bd86f93ae61010e79fd\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:24.858700 kubelet[2151]: I0513 00:21:24.858690 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/694d815de2e87bd86f93ae61010e79fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"694d815de2e87bd86f93ae61010e79fd\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:24.858780 kubelet[2151]: I0513 00:21:24.858711 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:24.858780 kubelet[2151]: I0513 00:21:24.858730 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:24.858780 kubelet[2151]: I0513 00:21:24.858747 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:24.858780 
kubelet[2151]: I0513 00:21:24.858767 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:21:24.858780 kubelet[2151]: I0513 00:21:24.858782 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/694d815de2e87bd86f93ae61010e79fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"694d815de2e87bd86f93ae61010e79fd\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:24.858927 kubelet[2151]: I0513 00:21:24.858799 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:24.858927 kubelet[2151]: I0513 00:21:24.858814 2151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:24.874850 kubelet[2151]: I0513 00:21:24.874813 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:24.875274 kubelet[2151]: E0513 00:21:24.875237 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 13 00:21:25.059904 kubelet[2151]: E0513 00:21:25.059839 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" May 13 00:21:25.077075 kubelet[2151]: I0513 00:21:25.077043 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:25.077451 kubelet[2151]: E0513 00:21:25.077413 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 13 00:21:25.109720 kubelet[2151]: E0513 00:21:25.109675 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.110315 containerd[1471]: time="2025-05-13T00:21:25.110271746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 00:21:25.113586 kubelet[2151]: E0513 00:21:25.113537 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.113962 containerd[1471]: time="2025-05-13T00:21:25.113925636Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 00:21:25.117193 kubelet[2151]: E0513 00:21:25.117151 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.117557 containerd[1471]: time="2025-05-13T00:21:25.117517943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:694d815de2e87bd86f93ae61010e79fd,Namespace:kube-system,Attempt:0,}" May 13 00:21:25.347782 kubelet[2151]: W0513 00:21:25.347630 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:25.347782 kubelet[2151]: E0513 00:21:25.347683 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:25.370840 kubelet[2151]: W0513 00:21:25.370801 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:25.370918 kubelet[2151]: E0513 00:21:25.370843 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:25.377505 kubelet[2151]: W0513 00:21:25.377435 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:25.377505 kubelet[2151]: E0513 00:21:25.377496 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:25.479945 kubelet[2151]: I0513 00:21:25.479899 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:25.486429 kubelet[2151]: E0513 00:21:25.486363 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 13 00:21:25.860628 kubelet[2151]: E0513 00:21:25.860580 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" May 13 00:21:26.019280 kubelet[2151]: W0513 00:21:26.019212 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:26.019280 kubelet[2151]: E0513 00:21:26.019274 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:26.288439 kubelet[2151]: I0513 00:21:26.288396 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:26.288911 kubelet[2151]: E0513 00:21:26.288697 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 13 00:21:26.495712 kubelet[2151]: E0513 00:21:26.495659 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:27.140890 kubelet[2151]: W0513 00:21:27.140817 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:27.140890 kubelet[2151]: E0513 00:21:27.140881 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:27.344848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423558824.mount: Deactivated successfully. 
May 13 00:21:27.349986 containerd[1471]: time="2025-05-13T00:21:27.349943489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:27.350754 containerd[1471]: time="2025-05-13T00:21:27.350707744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 13 00:21:27.353574 containerd[1471]: time="2025-05-13T00:21:27.353541967Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:27.354787 containerd[1471]: time="2025-05-13T00:21:27.354733674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:27.355508 containerd[1471]: time="2025-05-13T00:21:27.355465949Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:27.356177 containerd[1471]: time="2025-05-13T00:21:27.356058723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:21:27.357033 containerd[1471]: time="2025-05-13T00:21:27.356980648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:21:27.358084 containerd[1471]: time="2025-05-13T00:21:27.358052019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:27.358932 containerd[1471]: time="2025-05-13T00:21:27.358906495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.248559228s" May 13 00:21:27.361363 kubelet[2151]: W0513 00:21:27.361292 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:27.361363 kubelet[2151]: E0513 00:21:27.361335 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:27.363825 containerd[1471]: time="2025-05-13T00:21:27.363784755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.249811562s" May 13 00:21:27.365437 containerd[1471]: 
time="2025-05-13T00:21:27.365400055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.2477988s" May 13 00:21:27.464021 kubelet[2151]: E0513 00:21:27.461919 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="3.2s" May 13 00:21:27.586790 containerd[1471]: time="2025-05-13T00:21:27.586681141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:27.586790 containerd[1471]: time="2025-05-13T00:21:27.586747105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:27.586790 containerd[1471]: time="2025-05-13T00:21:27.586761726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:27.587751 containerd[1471]: time="2025-05-13T00:21:27.587569210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:27.595784 containerd[1471]: time="2025-05-13T00:21:27.595666428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:27.595784 containerd[1471]: time="2025-05-13T00:21:27.595717591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:27.595784 containerd[1471]: time="2025-05-13T00:21:27.595731710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:27.596224 containerd[1471]: time="2025-05-13T00:21:27.595802789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:27.599536 containerd[1471]: time="2025-05-13T00:21:27.599468343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:27.599776 containerd[1471]: time="2025-05-13T00:21:27.599702216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:27.599776 containerd[1471]: time="2025-05-13T00:21:27.599740784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:27.599998 containerd[1471]: time="2025-05-13T00:21:27.599951341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:27.611017 systemd[1]: Started cri-containerd-cf9db729c0a26e40d1275138149e2b8bacf61e85cfc7724dddfe136c830c93ca.scope - libcontainer container cf9db729c0a26e40d1275138149e2b8bacf61e85cfc7724dddfe136c830c93ca. 
May 13 00:21:27.698064 kubelet[2151]: W0513 00:21:27.698018 2151 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused May 13 00:21:27.698064 kubelet[2151]: E0513 00:21:27.698061 2151 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" May 13 00:21:27.698999 systemd[1]: Started cri-containerd-cbfd4db47aebbff6432d972c0e43a25350ca38f283a065b44fad24d8ef85dc7d.scope - libcontainer container cbfd4db47aebbff6432d972c0e43a25350ca38f283a065b44fad24d8ef85dc7d. May 13 00:21:27.704540 systemd[1]: Started cri-containerd-e59d78fb4f6f4d4289c238506c955c6008f40c8177fadc2708bdb5a056d4b796.scope - libcontainer container e59d78fb4f6f4d4289c238506c955c6008f40c8177fadc2708bdb5a056d4b796. May 13 00:21:27.735291 containerd[1471]: time="2025-05-13T00:21:27.735156730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:694d815de2e87bd86f93ae61010e79fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf9db729c0a26e40d1275138149e2b8bacf61e85cfc7724dddfe136c830c93ca\"" May 13 00:21:27.740534 kubelet[2151]: E0513 00:21:27.739993 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:27.744952 containerd[1471]: time="2025-05-13T00:21:27.744836908Z" level=info msg="CreateContainer within sandbox \"cf9db729c0a26e40d1275138149e2b8bacf61e85cfc7724dddfe136c830c93ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:21:27.756234 containerd[1471]: time="2025-05-13T00:21:27.756124142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"e59d78fb4f6f4d4289c238506c955c6008f40c8177fadc2708bdb5a056d4b796\"" May 13 00:21:27.756516 containerd[1471]: time="2025-05-13T00:21:27.756471171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbfd4db47aebbff6432d972c0e43a25350ca38f283a065b44fad24d8ef85dc7d\"" May 13 00:21:27.757196 kubelet[2151]: E0513 00:21:27.757169 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:27.757316 kubelet[2151]: E0513 00:21:27.757290 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:27.759343 containerd[1471]: time="2025-05-13T00:21:27.759320045Z" level=info msg="CreateContainer within sandbox \"e59d78fb4f6f4d4289c238506c955c6008f40c8177fadc2708bdb5a056d4b796\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:21:27.759695 containerd[1471]: time="2025-05-13T00:21:27.759664196Z" level=info msg="CreateContainer within sandbox \"cbfd4db47aebbff6432d972c0e43a25350ca38f283a065b44fad24d8ef85dc7d\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:21:27.770610 containerd[1471]: time="2025-05-13T00:21:27.770564169Z" level=info msg="CreateContainer within sandbox \"cf9db729c0a26e40d1275138149e2b8bacf61e85cfc7724dddfe136c830c93ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1b25c36fd06ce762f7062a9c4bd104d9083ba42cd41b1bb09f72bd1fc67b16f5\"" May 13 00:21:27.771135 containerd[1471]: time="2025-05-13T00:21:27.771099895Z" level=info msg="StartContainer for \"1b25c36fd06ce762f7062a9c4bd104d9083ba42cd41b1bb09f72bd1fc67b16f5\"" May 13 00:21:27.784312 containerd[1471]: time="2025-05-13T00:21:27.784279563Z" level=info msg="CreateContainer within sandbox \"e59d78fb4f6f4d4289c238506c955c6008f40c8177fadc2708bdb5a056d4b796\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d0889b3d1cefda2b495ee794aef0b031f08430542972adc0e3fa2a0fd8ab3a7\"" May 13 00:21:27.784945 containerd[1471]: time="2025-05-13T00:21:27.784815609Z" level=info msg="StartContainer for \"4d0889b3d1cefda2b495ee794aef0b031f08430542972adc0e3fa2a0fd8ab3a7\"" May 13 00:21:27.786557 containerd[1471]: time="2025-05-13T00:21:27.786452630Z" level=info msg="CreateContainer within sandbox \"cbfd4db47aebbff6432d972c0e43a25350ca38f283a065b44fad24d8ef85dc7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e2874c6d220264d3bcb2e3776a873458a3ef55eb7d94da5ffb71486fe7e8d0b\"" May 13 00:21:27.786846 containerd[1471]: time="2025-05-13T00:21:27.786820237Z" level=info msg="StartContainer for \"2e2874c6d220264d3bcb2e3776a873458a3ef55eb7d94da5ffb71486fe7e8d0b\"" May 13 00:21:27.807003 systemd[1]: Started cri-containerd-1b25c36fd06ce762f7062a9c4bd104d9083ba42cd41b1bb09f72bd1fc67b16f5.scope - libcontainer container 1b25c36fd06ce762f7062a9c4bd104d9083ba42cd41b1bb09f72bd1fc67b16f5. May 13 00:21:27.810299 systemd[1]: Started cri-containerd-4d0889b3d1cefda2b495ee794aef0b031f08430542972adc0e3fa2a0fd8ab3a7.scope - libcontainer container 4d0889b3d1cefda2b495ee794aef0b031f08430542972adc0e3fa2a0fd8ab3a7. May 13 00:21:27.813295 systemd[1]: Started cri-containerd-2e2874c6d220264d3bcb2e3776a873458a3ef55eb7d94da5ffb71486fe7e8d0b.scope - libcontainer container 2e2874c6d220264d3bcb2e3776a873458a3ef55eb7d94da5ffb71486fe7e8d0b. 
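The containerd entries in this stretch trace the CRI call order for each static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer (whose success is logged just below). A sketch of the same three calls against the containerd socket using k8s.io/cri-api; the socket path, names, and image are placeholders, and the exact request fields should be treated as assumptions rather than a drop-in client.

```go
// CRI sandbox -> container -> start sequence, as the log records it.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sbConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-apiserver-localhost", Uid: "demo-uid", // placeholder identifiers
			Namespace: "kube-system", Attempt: 0,
		},
	}

	// 1. Sandbox first - the "RunPodSandbox ... returns sandbox id" line.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Container created inside that sandbox - "CreateContainer within sandbox".
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.32.0"},
		},
		SandboxConfig: sbConfig,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. Start it - "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox=%s container=%s started", sb.PodSandboxId, c.ContainerId)
}
```

The systemd "Started cri-containerd-….scope" lines above are the runtime-side counterpart: each created container gets its own transient scope unit because the cgroup driver is systemd.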
May 13 00:21:27.889113 containerd[1471]: time="2025-05-13T00:21:27.888752666Z" level=info msg="StartContainer for \"4d0889b3d1cefda2b495ee794aef0b031f08430542972adc0e3fa2a0fd8ab3a7\" returns successfully" May 13 00:21:27.891329 kubelet[2151]: I0513 00:21:27.891310 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:27.891913 kubelet[2151]: E0513 00:21:27.891890 2151 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" May 13 00:21:27.892979 containerd[1471]: time="2025-05-13T00:21:27.892916462Z" level=info msg="StartContainer for \"1b25c36fd06ce762f7062a9c4bd104d9083ba42cd41b1bb09f72bd1fc67b16f5\" returns successfully" May 13 00:21:27.900347 containerd[1471]: time="2025-05-13T00:21:27.900304308Z" level=info msg="StartContainer for \"2e2874c6d220264d3bcb2e3776a873458a3ef55eb7d94da5ffb71486fe7e8d0b\" returns successfully" May 13 00:21:28.492986 kubelet[2151]: E0513 00:21:28.492743 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:28.492986 kubelet[2151]: E0513 00:21:28.492871 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:28.496549 kubelet[2151]: E0513 00:21:28.496026 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:28.496549 kubelet[2151]: E0513 00:21:28.496111 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:28.497231 kubelet[2151]: E0513 00:21:28.497219 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:28.497370 kubelet[2151]: E0513 00:21:28.497359 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:29.296357 kubelet[2151]: E0513 00:21:29.296303 2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:21:29.499147 kubelet[2151]: E0513 00:21:29.499106 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:29.499147 kubelet[2151]: E0513 00:21:29.499129 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:29.499601 kubelet[2151]: E0513 00:21:29.499229 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:29.499601 kubelet[2151]: E0513 00:21:29.499273 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:29.657064 kubelet[2151]: E0513 
00:21:29.656926 2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:21:30.094191 kubelet[2151]: E0513 00:21:30.094115 2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:21:30.666779 kubelet[2151]: E0513 00:21:30.666717 2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:21:30.759479 kubelet[2151]: E0513 00:21:30.759437 2151 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:21:30.759625 kubelet[2151]: E0513 00:21:30.759579 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:30.998072 kubelet[2151]: E0513 00:21:30.998021 2151 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:21:31.094219 kubelet[2151]: I0513 00:21:31.094182 2151 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:31.100526 kubelet[2151]: I0513 00:21:31.100489 2151 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:21:31.158668 kubelet[2151]: I0513 00:21:31.158598 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:21:31.167243 kubelet[2151]: I0513 00:21:31.167201 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:21:31.170712 kubelet[2151]: I0513 00:21:31.170671 2151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:21:31.451123 kubelet[2151]: I0513 00:21:31.451067 2151 apiserver.go:52] "Watching apiserver" May 13 00:21:31.453564 kubelet[2151]: E0513 00:21:31.453536 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:31.453564 kubelet[2151]: E0513 00:21:31.453573 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:31.457176 kubelet[2151]: I0513 00:21:31.457150 2151 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:21:31.501941 kubelet[2151]: E0513 00:21:31.501900 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:31.832139 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-7.scope)... May 13 00:21:31.832160 systemd[1]: Reloading... May 13 00:21:31.924900 zram_generator::config[2473]: No configuration found. May 13 00:21:32.032490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
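The recurring dns.go:153 warnings in this stretch arise because the kubelet writes at most three nameservers into a pod's resolv.conf and drops the rest, which is why exactly three servers appear in the logged "applied nameserver line". A stdlib sketch of that trimming; the limit of 3 matches the log, the parsing is a simplified assumption.

```go
// Trim a resolv.conf to the kubelet's per-pod nameserver limit.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the per-pod cap implied by the warnings above

// trimNameservers returns the applied servers and whether any were dropped.
func trimNameservers(path string) ([]string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, false, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	trimmed := len(servers) > maxNameservers
	if trimmed {
		servers = servers[:maxNameservers]
	}
	return servers, trimmed, sc.Err()
}

func main() {
	servers, trimmed, err := trimNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if trimmed {
		fmt.Println("nameserver limits exceeded, applied:", strings.Join(servers, " "))
	} else {
		fmt.Println("applied:", strings.Join(servers, " "))
	}
}
```

On this host the node's resolv.conf evidently lists more than three servers, so every pod sandbox setup re-emits the same warning with the first three (1.1.1.1 1.0.0.1 8.8.8.8) applied.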
May 13 00:21:32.127211 systemd[1]: Reloading finished in 294 ms. May 13 00:21:32.140687 kubelet[2151]: E0513 00:21:32.140632 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:32.172252 kubelet[2151]: I0513 00:21:32.172154 2151 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:21:32.172240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:32.191442 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:21:32.192219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:32.202278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:32.355095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:32.360711 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:21:32.403158 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:32.403158 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:21:32.403158 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:32.403591 kubelet[2514]: I0513 00:21:32.403156 2514 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:21:32.409417 kubelet[2514]: I0513 00:21:32.409388 2514 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:21:32.409417 kubelet[2514]: I0513 00:21:32.409409 2514 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:21:32.409658 kubelet[2514]: I0513 00:21:32.409640 2514 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:21:32.410663 kubelet[2514]: I0513 00:21:32.410644 2514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:21:32.412675 kubelet[2514]: I0513 00:21:32.412637 2514 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:21:32.416476 kubelet[2514]: E0513 00:21:32.416414 2514 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:21:32.416476 kubelet[2514]: I0513 00:21:32.416442 2514 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:21:32.420954 kubelet[2514]: I0513 00:21:32.420931 2514 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:21:32.421204 kubelet[2514]: I0513 00:21:32.421171 2514 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:21:32.421367 kubelet[2514]: I0513 00:21:32.421199 2514 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:21:32.421457 kubelet[2514]: I0513 00:21:32.421371 2514 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:21:32.421457 kubelet[2514]: I0513 00:21:32.421379 2514 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:21:32.421457 kubelet[2514]: I0513 00:21:32.421424 2514 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:32.421582 kubelet[2514]: I0513 00:21:32.421571 2514 kubelet.go:446] "Attempting to sync node with API server" May 13 00:21:32.421607 kubelet[2514]: I0513 00:21:32.421586 2514 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:21:32.421607 kubelet[2514]: I0513 00:21:32.421601 2514 kubelet.go:352] "Adding apiserver pod source" May 13 00:21:32.421652 kubelet[2514]: I0513 00:21:32.421615 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:21:32.422220 kubelet[2514]: I0513 00:21:32.422197 2514 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:21:32.424892 kubelet[2514]: I0513 00:21:32.422767 2514 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:21:32.424892 kubelet[2514]: I0513 00:21:32.423292 2514 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:21:32.424892 kubelet[2514]: I0513 00:21:32.423321 2514 server.go:1287] "Started kubelet" May 13 00:21:32.424892 kubelet[2514]: I0513 00:21:32.423678 2514 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:21:32.424892 kubelet[2514]: I0513 00:21:32.423956 2514 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:21:32.424892 kubelet[2514]: I0513 00:21:32.424255 2514 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:21:32.425307 kubelet[2514]: I0513 00:21:32.425288 2514 server.go:490] "Adding debug handlers to kubelet server" May 13 00:21:32.428878 kubelet[2514]: I0513 00:21:32.428684 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:21:32.429889 kubelet[2514]: E0513 00:21:32.429629 2514 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:21:32.429889 kubelet[2514]: I0513 00:21:32.429794 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:21:32.431937 kubelet[2514]: I0513 00:21:32.431910 2514 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:21:32.432402 kubelet[2514]: I0513 00:21:32.432362 2514 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:21:32.432592 kubelet[2514]: I0513 00:21:32.432565 2514 reconciler.go:26] "Reconciler: start to sync state" May 13 00:21:32.435969 kubelet[2514]: I0513 00:21:32.435928 2514 factory.go:221] Registration of the systemd container factory successfully May 13 00:21:32.436329 kubelet[2514]: I0513 00:21:32.436099 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:21:32.439306 kubelet[2514]: I0513 00:21:32.439242 2514 factory.go:221] Registration of the containerd container factory successfully May 13 00:21:32.439831 kubelet[2514]: E0513 00:21:32.439789 2514 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:21:32.450786 kubelet[2514]: I0513 00:21:32.450648 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:21:32.453750 kubelet[2514]: I0513 00:21:32.452027 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:21:32.453750 kubelet[2514]: I0513 00:21:32.452070 2514 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:21:32.453750 kubelet[2514]: I0513 00:21:32.452199 2514 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 00:21:32.453750 kubelet[2514]: I0513 00:21:32.452207 2514 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:21:32.453750 kubelet[2514]: E0513 00:21:32.453531 2514 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:21:32.480417 kubelet[2514]: I0513 00:21:32.480325 2514 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:21:32.480417 kubelet[2514]: I0513 00:21:32.480345 2514 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:21:32.480417 kubelet[2514]: I0513 00:21:32.480364 2514 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:32.480647 kubelet[2514]: I0513 00:21:32.480523 2514 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:21:32.480647 kubelet[2514]: I0513 00:21:32.480536 2514 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:21:32.480647 kubelet[2514]: I0513 00:21:32.480558 2514 policy_none.go:49] "None policy: Start" May 13 00:21:32.480647 kubelet[2514]: I0513 00:21:32.480569 2514 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:21:32.480647 kubelet[2514]: I0513 00:21:32.480582 2514 state_mem.go:35] "Initializing new in-memory state store" May 13 00:21:32.480757 kubelet[2514]: I0513 00:21:32.480699 2514 state_mem.go:75] "Updated machine memory state" May 13 00:21:32.486771 kubelet[2514]: I0513 00:21:32.486741 2514 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:21:32.487366 kubelet[2514]: I0513 00:21:32.487258 2514 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:21:32.487366 kubelet[2514]: I0513 00:21:32.487277 2514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:21:32.488354 kubelet[2514]: I0513 00:21:32.487696 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:21:32.490018 kubelet[2514]: E0513 00:21:32.489997 2514 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:21:32.555296 kubelet[2514]: I0513 00:21:32.554956 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:21:32.555296 kubelet[2514]: I0513 00:21:32.554956 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.555296 kubelet[2514]: I0513 00:21:32.555051 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:21:32.560757 kubelet[2514]: E0513 00:21:32.560661 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:21:32.561111 kubelet[2514]: E0513 00:21:32.561062 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 00:21:32.561316 kubelet[2514]: E0513 00:21:32.561299 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.593723 kubelet[2514]: I0513 00:21:32.593666 2514 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:21:32.601808 kubelet[2514]: I0513 00:21:32.601769 2514 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 00:21:32.601945 kubelet[2514]: I0513 00:21:32.601841 2514 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:21:32.634319 kubelet[2514]: I0513 00:21:32.634250 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/694d815de2e87bd86f93ae61010e79fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"694d815de2e87bd86f93ae61010e79fd\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:32.634319 kubelet[2514]: I0513 00:21:32.634311 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/694d815de2e87bd86f93ae61010e79fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"694d815de2e87bd86f93ae61010e79fd\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:32.634496 kubelet[2514]: I0513 00:21:32.634381 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.634590 kubelet[2514]: I0513 00:21:32.634550 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.634590 kubelet[2514]: I0513 00:21:32.634585 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.634590 kubelet[2514]: I0513 00:21:32.634608 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:21:32.634799 kubelet[2514]: I0513 00:21:32.634631 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/694d815de2e87bd86f93ae61010e79fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"694d815de2e87bd86f93ae61010e79fd\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:32.634799 kubelet[2514]: I0513 00:21:32.634654 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.634799 kubelet[2514]: I0513 00:21:32.634681 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:32.862164 kubelet[2514]: E0513 00:21:32.861920 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:32.862164 kubelet[2514]: E0513 00:21:32.861920 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:32.862164 kubelet[2514]: E0513 00:21:32.862087 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:33.423020 kubelet[2514]: I0513 00:21:33.422968 2514 apiserver.go:52] "Watching apiserver" May 13 00:21:33.433175 kubelet[2514]: I0513 00:21:33.432963 2514 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:21:33.467054 kubelet[2514]: E0513 00:21:33.467019 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:33.467597 kubelet[2514]: I0513 00:21:33.467560 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:21:33.468201 kubelet[2514]: I0513 00:21:33.468174 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:21:33.482598 kubelet[2514]: E0513 00:21:33.482562 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:21:33.485650 kubelet[2514]: E0513 00:21:33.482924 2514 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:33.489258 kubelet[2514]: E0513 00:21:33.489027 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 00:21:33.489258 kubelet[2514]: E0513 00:21:33.489193 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:33.580144 kubelet[2514]: I0513 00:21:33.580055 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.58003671 podStartE2EDuration="2.58003671s" podCreationTimestamp="2025-05-13 00:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:33.525323227 +0000 UTC m=+1.160692014" watchObservedRunningTime="2025-05-13 00:21:33.58003671 +0000 UTC m=+1.215405497" May 13 00:21:33.580351 kubelet[2514]: I0513 00:21:33.580173 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.580169356 podStartE2EDuration="2.580169356s" podCreationTimestamp="2025-05-13 00:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:33.558217843 +0000 UTC m=+1.193586630" watchObservedRunningTime="2025-05-13 00:21:33.580169356 +0000 UTC m=+1.215538143" May 13 00:21:33.586877 kubelet[2514]: I0513 00:21:33.586672 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.586653043 podStartE2EDuration="2.586653043s" podCreationTimestamp="2025-05-13 00:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:33.586494192 +0000 UTC m=+1.221862979" watchObservedRunningTime="2025-05-13 00:21:33.586653043 +0000 UTC m=+1.222021830" May 13 00:21:34.467959 kubelet[2514]: E0513 00:21:34.467914 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:34.468400 kubelet[2514]: E0513 00:21:34.468133 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:34.468400 kubelet[2514]: E0513 00:21:34.468317 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:35.470100 kubelet[2514]: E0513 00:21:35.470063 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:35.870083 kubelet[2514]: E0513 00:21:35.869931 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:36.408247 kubelet[2514]: I0513 00:21:36.408209 2514 
kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:21:36.408491 containerd[1471]: time="2025-05-13T00:21:36.408456473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:21:36.408844 kubelet[2514]: I0513 00:21:36.408673 2514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:21:36.471359 kubelet[2514]: E0513 00:21:36.471320 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:36.842896 sudo[1653]: pam_unix(sudo:session): session closed for user root May 13 00:21:36.844765 sshd[1649]: pam_unix(sshd:session): session closed for user core May 13 00:21:36.849100 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:39232.service: Deactivated successfully. May 13 00:21:36.851130 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:21:36.851322 systemd[1]: session-7.scope: Consumed 3.925s CPU time, 159.3M memory peak, 0B memory swap peak. May 13 00:21:36.851994 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. May 13 00:21:36.852810 systemd-logind[1458]: Removed session 7. May 13 00:21:37.283592 systemd[1]: Created slice kubepods-besteffort-podbcd19058_c5ff_4317_9184_6e79b336dacf.slice - libcontainer container kubepods-besteffort-podbcd19058_c5ff_4317_9184_6e79b336dacf.slice. May 13 00:21:37.368061 kubelet[2514]: I0513 00:21:37.367990 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bcd19058-c5ff-4317-9184-6e79b336dacf-kube-proxy\") pod \"kube-proxy-jm284\" (UID: \"bcd19058-c5ff-4317-9184-6e79b336dacf\") " pod="kube-system/kube-proxy-jm284" May 13 00:21:37.368061 kubelet[2514]: I0513 00:21:37.368035 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcd19058-c5ff-4317-9184-6e79b336dacf-xtables-lock\") pod \"kube-proxy-jm284\" (UID: \"bcd19058-c5ff-4317-9184-6e79b336dacf\") " pod="kube-system/kube-proxy-jm284" May 13 00:21:37.368061 kubelet[2514]: I0513 00:21:37.368050 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcd19058-c5ff-4317-9184-6e79b336dacf-lib-modules\") pod \"kube-proxy-jm284\" (UID: \"bcd19058-c5ff-4317-9184-6e79b336dacf\") " pod="kube-system/kube-proxy-jm284" May 13 00:21:37.368061 kubelet[2514]: I0513 00:21:37.368066 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxl2m\" (UniqueName: \"kubernetes.io/projected/bcd19058-c5ff-4317-9184-6e79b336dacf-kube-api-access-wxl2m\") pod \"kube-proxy-jm284\" (UID: \"bcd19058-c5ff-4317-9184-6e79b336dacf\") " pod="kube-system/kube-proxy-jm284" May 13 00:21:37.526542 systemd[1]: Created slice kubepods-besteffort-podb16bf786_1254_4d3d_9ab1_7ba2579acdb9.slice - libcontainer container kubepods-besteffort-podb16bf786_1254_4d3d_9ab1_7ba2579acdb9.slice. 
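The two kubelet records above ("Updating runtime config through cri with podcidr" and "Updating Pod CIDR") show the kubelet handing the node's pod CIDR to the container runtime over the CRI, after which containerd reports that it is waiting for a CNI config to appear. A minimal sketch of that call against the CRI gRPC API follows — the socket path and client wiring are assumptions for illustration, not taken from this log:

```go
// Hedged sketch: propagating a pod CIDR to a CRI runtime, mirroring the
// "Updating runtime config through cri with podcidr" step in the log above.
// The containerd socket path is an assumption.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Equivalent of the kuberuntime_manager.go:1702 record: hand the node's
	// pod CIDR to the runtime so it can configure pod networking.
	_, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

On this host the runtime acknowledges the update but, as the containerd record notes, defers actual CNI configuration until another component (here, Calico) drops a config file.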
May 13 00:21:37.569691 kubelet[2514]: I0513 00:21:37.569554 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b16bf786-1254-4d3d-9ab1-7ba2579acdb9-var-lib-calico\") pod \"tigera-operator-789496d6f5-brzmh\" (UID: \"b16bf786-1254-4d3d-9ab1-7ba2579acdb9\") " pod="tigera-operator/tigera-operator-789496d6f5-brzmh" May 13 00:21:37.569691 kubelet[2514]: I0513 00:21:37.569600 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjfs\" (UniqueName: \"kubernetes.io/projected/b16bf786-1254-4d3d-9ab1-7ba2579acdb9-kube-api-access-kkjfs\") pod \"tigera-operator-789496d6f5-brzmh\" (UID: \"b16bf786-1254-4d3d-9ab1-7ba2579acdb9\") " pod="tigera-operator/tigera-operator-789496d6f5-brzmh" May 13 00:21:37.594848 kubelet[2514]: E0513 00:21:37.594788 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:37.598150 containerd[1471]: time="2025-05-13T00:21:37.598093002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jm284,Uid:bcd19058-c5ff-4317-9184-6e79b336dacf,Namespace:kube-system,Attempt:0,}" May 13 00:21:37.626421 containerd[1471]: time="2025-05-13T00:21:37.626300128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:37.626421 containerd[1471]: time="2025-05-13T00:21:37.626374216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:37.626421 containerd[1471]: time="2025-05-13T00:21:37.626387909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:37.626557 containerd[1471]: time="2025-05-13T00:21:37.626494576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:37.654041 systemd[1]: Started cri-containerd-99f8ce40ed1b64cddf9b71a087aad347f5ec01ead8cc39de652ca2307ed6642e.scope - libcontainer container 99f8ce40ed1b64cddf9b71a087aad347f5ec01ead8cc39de652ca2307ed6642e. 
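The RunPodSandbox record above is containerd being asked to create the kube-proxy pod sandbox; the systemd "Started cri-containerd-99f8ce40…" unit that follows is the shim scope for that sandbox. A sketch of the same request through the CRI API, reusing the metadata shown in the log (the log directory is an assumption):

```go
// Hedged sketch: the CRI round-trip behind the RunPodSandbox record above.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runSandbox mirrors what the log shows containerd being asked to do for
// kube-proxy-jm284; name, UID, and namespace come from the log record.
func runSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (string, error) {
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-jm284",
				Uid:       "bcd19058-c5ff-4317-9184-6e79b336dacf",
				Namespace: "kube-system",
				Attempt:   0,
			},
			// Directory layout is an assumption, not taken from this log.
			LogDirectory: "/var/log/pods/kube-system_kube-proxy-jm284_bcd19058-c5ff-4317-9184-6e79b336dacf",
		},
	})
	if err != nil {
		return "", err
	}
	// containerd answers with the sandbox id (99f8ce40ed1b… in the log).
	return resp.PodSandboxId, nil
}
```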
May 13 00:21:37.680174 containerd[1471]: time="2025-05-13T00:21:37.680136289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jm284,Uid:bcd19058-c5ff-4317-9184-6e79b336dacf,Namespace:kube-system,Attempt:0,} returns sandbox id \"99f8ce40ed1b64cddf9b71a087aad347f5ec01ead8cc39de652ca2307ed6642e\"" May 13 00:21:37.681003 kubelet[2514]: E0513 00:21:37.680971 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:37.683451 containerd[1471]: time="2025-05-13T00:21:37.683402224Z" level=info msg="CreateContainer within sandbox \"99f8ce40ed1b64cddf9b71a087aad347f5ec01ead8cc39de652ca2307ed6642e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:21:37.703228 containerd[1471]: time="2025-05-13T00:21:37.703179526Z" level=info msg="CreateContainer within sandbox \"99f8ce40ed1b64cddf9b71a087aad347f5ec01ead8cc39de652ca2307ed6642e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcc1c773e088624cc2706cd08b6b603d85eb155a6e73695c58a5adba77088c31\"" May 13 00:21:37.703730 containerd[1471]: time="2025-05-13T00:21:37.703690394Z" level=info msg="StartContainer for \"dcc1c773e088624cc2706cd08b6b603d85eb155a6e73695c58a5adba77088c31\"" May 13 00:21:37.739043 systemd[1]: Started cri-containerd-dcc1c773e088624cc2706cd08b6b603d85eb155a6e73695c58a5adba77088c31.scope - libcontainer container dcc1c773e088624cc2706cd08b6b603d85eb155a6e73695c58a5adba77088c31. May 13 00:21:37.773949 containerd[1471]: time="2025-05-13T00:21:37.773900063Z" level=info msg="StartContainer for \"dcc1c773e088624cc2706cd08b6b603d85eb155a6e73695c58a5adba77088c31\" returns successfully" May 13 00:21:37.830706 containerd[1471]: time="2025-05-13T00:21:37.830567684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-brzmh,Uid:b16bf786-1254-4d3d-9ab1-7ba2579acdb9,Namespace:tigera-operator,Attempt:0,}" May 13 00:21:37.853585 containerd[1471]: time="2025-05-13T00:21:37.853275564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:37.853585 containerd[1471]: time="2025-05-13T00:21:37.853365891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:37.853585 containerd[1471]: time="2025-05-13T00:21:37.853382501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:37.853585 containerd[1471]: time="2025-05-13T00:21:37.853477891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:37.870999 systemd[1]: Started cri-containerd-68b5bb28f21c2a63c9c44c4be6a283275c8ce62ddee26ff260f92b1dc9b2c571.scope - libcontainer container 68b5bb28f21c2a63c9c44c4be6a283275c8ce62ddee26ff260f92b1dc9b2c571. 
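The CreateContainer/StartContainer records above complete kube-proxy's startup inside sandbox 99f8ce40ed1b…. Sketched as the corresponding CRI calls — the image reference is a hypothetical placeholder, since the log elides it:

```go
// Hedged sketch: the CreateContainer/StartContainer pair the log shows inside
// the kube-proxy sandbox. Only the container name comes from the log.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func startKubeProxy(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Image tag is an assumption; the log does not record it.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	// The log's container id dcc1c773e088… is returned here, then started.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}
```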
May 13 00:21:37.909915 containerd[1471]: time="2025-05-13T00:21:37.909834343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-brzmh,Uid:b16bf786-1254-4d3d-9ab1-7ba2579acdb9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"68b5bb28f21c2a63c9c44c4be6a283275c8ce62ddee26ff260f92b1dc9b2c571\"" May 13 00:21:37.914799 containerd[1471]: time="2025-05-13T00:21:37.914763868Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:21:38.475373 kubelet[2514]: E0513 00:21:38.475336 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:38.483599 kubelet[2514]: I0513 00:21:38.483550 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jm284" podStartSLOduration=1.4835340399999999 podStartE2EDuration="1.48353404s" podCreationTimestamp="2025-05-13 00:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:38.483404551 +0000 UTC m=+6.118773338" watchObservedRunningTime="2025-05-13 00:21:38.48353404 +0000 UTC m=+6.118902827" May 13 00:21:39.288799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661135180.mount: Deactivated successfully. May 13 00:21:39.938915 kubelet[2514]: E0513 00:21:39.938839 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:40.478781 kubelet[2514]: E0513 00:21:40.478725 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:40.703356 update_engine[1460]: I20250513 00:21:40.703282 1460 update_attempter.cc:509] Updating boot flags... 
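The PullImage record above goes to the CRI ImageService rather than the RuntimeService. A sketch under the same assumptions as the previous fragments, with only the image reference taken from the log:

```go
// Hedged sketch: the ImageService call behind the PullImage record for the
// tigera-operator image.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func pullOperatorImage(ctx context.Context, img runtimeapi.ImageServiceClient) (string, error) {
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	if err != nil {
		return "", err
	}
	// On success the runtime returns the resolved reference; the log later
	// reports it as sha256:e9b19fa62f47… after a roughly 3.75 s pull.
	return resp.ImageRef, nil
}
```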
May 13 00:21:40.879317 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2869) May 13 00:21:40.916887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2764) May 13 00:21:40.987887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2764) May 13 00:21:41.482090 kubelet[2514]: E0513 00:21:41.482050 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:41.609084 containerd[1471]: time="2025-05-13T00:21:41.609020109Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:41.623174 containerd[1471]: time="2025-05-13T00:21:41.623110224Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 00:21:41.642451 containerd[1471]: time="2025-05-13T00:21:41.642418342Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:41.664818 containerd[1471]: time="2025-05-13T00:21:41.664785660Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:41.665451 containerd[1471]: time="2025-05-13T00:21:41.665411013Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.750612933s" May 13 00:21:41.665451 containerd[1471]: time="2025-05-13T00:21:41.665445583Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 00:21:41.667542 containerd[1471]: time="2025-05-13T00:21:41.667501327Z" level=info msg="CreateContainer within sandbox \"68b5bb28f21c2a63c9c44c4be6a283275c8ce62ddee26ff260f92b1dc9b2c571\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:21:41.942017 containerd[1471]: time="2025-05-13T00:21:41.941964332Z" level=info msg="CreateContainer within sandbox \"68b5bb28f21c2a63c9c44c4be6a283275c8ce62ddee26ff260f92b1dc9b2c571\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"997db14487e507e7d363518c02140815e48ec87ebb654f7160adb73318fd9c1b\"" May 13 00:21:41.942552 containerd[1471]: time="2025-05-13T00:21:41.942510473Z" level=info msg="StartContainer for \"997db14487e507e7d363518c02140815e48ec87ebb654f7160adb73318fd9c1b\"" May 13 00:21:41.974027 systemd[1]: Started cri-containerd-997db14487e507e7d363518c02140815e48ec87ebb654f7160adb73318fd9c1b.scope - libcontainer container 997db14487e507e7d363518c02140815e48ec87ebb654f7160adb73318fd9c1b. 
May 13 00:21:42.125055 containerd[1471]: time="2025-05-13T00:21:42.124990415Z" level=info msg="StartContainer for \"997db14487e507e7d363518c02140815e48ec87ebb654f7160adb73318fd9c1b\" returns successfully" May 13 00:21:42.494623 kubelet[2514]: I0513 00:21:42.494475 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-brzmh" podStartSLOduration=1.742082964 podStartE2EDuration="5.494458224s" podCreationTimestamp="2025-05-13 00:21:37 +0000 UTC" firstStartedPulling="2025-05-13 00:21:37.913675901 +0000 UTC m=+5.549044688" lastFinishedPulling="2025-05-13 00:21:41.666051161 +0000 UTC m=+9.301419948" observedRunningTime="2025-05-13 00:21:42.494239444 +0000 UTC m=+10.129608231" watchObservedRunningTime="2025-05-13 00:21:42.494458224 +0000 UTC m=+10.129827011" May 13 00:21:45.158057 systemd[1]: Created slice kubepods-besteffort-pod54ee4ee7_6a3d_449e_9eba_7771c2553dab.slice - libcontainer container kubepods-besteffort-pod54ee4ee7_6a3d_449e_9eba_7771c2553dab.slice. May 13 00:21:45.230819 kubelet[2514]: I0513 00:21:45.230580 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54ee4ee7-6a3d-449e-9eba-7771c2553dab-typha-certs\") pod \"calico-typha-787f747d64-5jgzc\" (UID: \"54ee4ee7-6a3d-449e-9eba-7771c2553dab\") " pod="calico-system/calico-typha-787f747d64-5jgzc" May 13 00:21:45.230819 kubelet[2514]: I0513 00:21:45.230648 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ee4ee7-6a3d-449e-9eba-7771c2553dab-tigera-ca-bundle\") pod \"calico-typha-787f747d64-5jgzc\" (UID: \"54ee4ee7-6a3d-449e-9eba-7771c2553dab\") " pod="calico-system/calico-typha-787f747d64-5jgzc" May 13 00:21:45.230819 kubelet[2514]: I0513 00:21:45.230679 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgp5\" (UniqueName: \"kubernetes.io/projected/54ee4ee7-6a3d-449e-9eba-7771c2553dab-kube-api-access-cdgp5\") pod \"calico-typha-787f747d64-5jgzc\" (UID: \"54ee4ee7-6a3d-449e-9eba-7771c2553dab\") " pod="calico-system/calico-typha-787f747d64-5jgzc" May 13 00:21:45.347516 systemd[1]: Created slice kubepods-besteffort-podc2561169_f0c3_493e_9f11_09b8bb7fadb0.slice - libcontainer container kubepods-besteffort-podc2561169_f0c3_493e_9f11_09b8bb7fadb0.slice. 
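The tigera-operator startup-latency record above carries enough timestamps to verify the tracker's arithmetic: podStartSLOduration is the end-to-end duration minus the image-pull window, and the log's values check out exactly.

```latex
\begin{align*}
t_{\mathrm{E2E}} &= t_{\mathrm{observedRunning}} - t_{\mathrm{created}}
  = 00{:}21{:}42.494458224 - 00{:}21{:}37 = 5.494458224\,\mathrm{s}\\
t_{\mathrm{pull}} &= t_{\mathrm{lastFinishedPulling}} - t_{\mathrm{firstStartedPulling}}
  = 41.666051161 - 37.913675901 = 3.752375260\,\mathrm{s}\\
t_{\mathrm{SLO}} &= t_{\mathrm{E2E}} - t_{\mathrm{pull}}
  = 5.494458224 - 3.752375260 = 1.742082964\,\mathrm{s}
\end{align*}
```

For the kube-proxy record earlier, firstStartedPulling and lastFinishedPulling are both the zero time, so the pull window is zero and the SLO duration equals the end-to-end duration.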
May 13 00:21:45.432265 kubelet[2514]: I0513 00:21:45.431767 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-flexvol-driver-host\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432265 kubelet[2514]: I0513 00:21:45.431837 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c2561169-f0c3-493e-9f11-09b8bb7fadb0-node-certs\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432265 kubelet[2514]: I0513 00:21:45.431882 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-var-lib-calico\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432265 kubelet[2514]: I0513 00:21:45.431909 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-policysync\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432265 kubelet[2514]: I0513 00:21:45.431933 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-var-run-calico\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432524 kubelet[2514]: I0513 00:21:45.431955 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-lib-modules\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432524 kubelet[2514]: I0513 00:21:45.431978 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-xtables-lock\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432524 kubelet[2514]: I0513 00:21:45.432005 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2561169-f0c3-493e-9f11-09b8bb7fadb0-tigera-ca-bundle\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432524 kubelet[2514]: I0513 00:21:45.432029 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-cni-bin-dir\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432524 kubelet[2514]: I0513 00:21:45.432098 2514 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-cni-log-dir\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432669 kubelet[2514]: I0513 00:21:45.432211 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c2561169-f0c3-493e-9f11-09b8bb7fadb0-cni-net-dir\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.432669 kubelet[2514]: I0513 00:21:45.432336 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mzx\" (UniqueName: \"kubernetes.io/projected/c2561169-f0c3-493e-9f11-09b8bb7fadb0-kube-api-access-55mzx\") pod \"calico-node-pjm5j\" (UID: \"c2561169-f0c3-493e-9f11-09b8bb7fadb0\") " pod="calico-system/calico-node-pjm5j" May 13 00:21:45.449187 kubelet[2514]: E0513 00:21:45.448484 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:45.461429 kubelet[2514]: E0513 00:21:45.461388 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:45.462063 containerd[1471]: time="2025-05-13T00:21:45.462020933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787f747d64-5jgzc,Uid:54ee4ee7-6a3d-449e-9eba-7771c2553dab,Namespace:calico-system,Attempt:0,}" May 13 00:21:45.515977 containerd[1471]: time="2025-05-13T00:21:45.515716135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:45.515977 containerd[1471]: time="2025-05-13T00:21:45.515810967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:45.515977 containerd[1471]: time="2025-05-13T00:21:45.515834450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:45.520895 containerd[1471]: time="2025-05-13T00:21:45.519049748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:45.532606 kubelet[2514]: I0513 00:21:45.532548 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b656054-c5df-4336-9a83-8d89d2e6a28d-varrun\") pod \"csi-node-driver-ms9sg\" (UID: \"6b656054-c5df-4336-9a83-8d89d2e6a28d\") " pod="calico-system/csi-node-driver-ms9sg" May 13 00:21:45.532764 kubelet[2514]: I0513 00:21:45.532647 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b656054-c5df-4336-9a83-8d89d2e6a28d-registration-dir\") pod \"csi-node-driver-ms9sg\" (UID: \"6b656054-c5df-4336-9a83-8d89d2e6a28d\") " pod="calico-system/csi-node-driver-ms9sg" May 13 00:21:45.532764 kubelet[2514]: I0513 00:21:45.532698 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b656054-c5df-4336-9a83-8d89d2e6a28d-socket-dir\") pod \"csi-node-driver-ms9sg\" (UID: \"6b656054-c5df-4336-9a83-8d89d2e6a28d\") " pod="calico-system/csi-node-driver-ms9sg" May 13 00:21:45.532832 kubelet[2514]: I0513 00:21:45.532760 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdh76\" (UniqueName: \"kubernetes.io/projected/6b656054-c5df-4336-9a83-8d89d2e6a28d-kube-api-access-sdh76\") pod \"csi-node-driver-ms9sg\" (UID: \"6b656054-c5df-4336-9a83-8d89d2e6a28d\") " pod="calico-system/csi-node-driver-ms9sg" May 13 00:21:45.532892 kubelet[2514]: I0513 00:21:45.532832 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b656054-c5df-4336-9a83-8d89d2e6a28d-kubelet-dir\") pod \"csi-node-driver-ms9sg\" (UID: \"6b656054-c5df-4336-9a83-8d89d2e6a28d\") " pod="calico-system/csi-node-driver-ms9sg" May 13 00:21:45.554578 kubelet[2514]: E0513 00:21:45.553025 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.554578 kubelet[2514]: W0513 00:21:45.553064 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.554578 kubelet[2514]: E0513 00:21:45.553095 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 00:21:45.555947 kubelet[2514]: E0513 00:21:45.554841 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[the same three-record FlexVolume probe failure — driver-call.go:262 "unexpected end of JSON input", driver-call.go:149 "executable file not found in $PATH" for /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, plugins.go:695 "Error dynamically probing plugins" — repeats verbatim dozens of times between 00:21:45.562 and 00:21:45.732; duplicates omitted]
May 13 00:21:45.568518 systemd[1]: Started cri-containerd-2b7889a9b65eabd27c498869478c90a01fbf07478b03ccc9f959bf22390dc2a3.scope - libcontainer container 2b7889a9b65eabd27c498869478c90a01fbf07478b03ccc9f959bf22390dc2a3.
May 13 00:21:45.612052 containerd[1471]: time="2025-05-13T00:21:45.611947665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787f747d64-5jgzc,Uid:54ee4ee7-6a3d-449e-9eba-7771c2553dab,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b7889a9b65eabd27c498869478c90a01fbf07478b03ccc9f959bf22390dc2a3\""
May 13 00:21:45.612623 kubelet[2514]: E0513 00:21:45.612597 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:21:45.613963 containerd[1471]: time="2025-05-13T00:21:45.613703996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 13 00:21:45.656586 kubelet[2514]: E0513 00:21:45.656535 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:21:45.657183 containerd[1471]: time="2025-05-13T00:21:45.657130722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pjm5j,Uid:c2561169-f0c3-493e-9f11-09b8bb7fadb0,Namespace:calico-system,Attempt:0,}"
Error: unexpected end of JSON input" May 13 00:21:45.874332 kubelet[2514]: E0513 00:21:45.874297 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934173 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.934276 kubelet[2514]: W0513 00:21:45.934221 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934241 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934479 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.934276 kubelet[2514]: W0513 00:21:45.934487 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934496 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934693 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.934276 kubelet[2514]: W0513 00:21:45.934700 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934708 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.934276 kubelet[2514]: E0513 00:21:45.934975 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937133 kubelet[2514]: W0513 00:21:45.934983 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937133 kubelet[2514]: E0513 00:21:45.934991 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:45.937133 kubelet[2514]: E0513 00:21:45.935205 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937133 kubelet[2514]: W0513 00:21:45.935215 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937133 kubelet[2514]: E0513 00:21:45.935224 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.937133 kubelet[2514]: E0513 00:21:45.935440 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937133 kubelet[2514]: W0513 00:21:45.935448 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937133 kubelet[2514]: E0513 00:21:45.935456 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.937133 kubelet[2514]: E0513 00:21:45.935679 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937133 kubelet[2514]: W0513 00:21:45.935686 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.935694 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.935906 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937870 kubelet[2514]: W0513 00:21:45.935913 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.935926 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.936137 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937870 kubelet[2514]: W0513 00:21:45.936144 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.936152 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.936431 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.937870 kubelet[2514]: W0513 00:21:45.936442 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.937870 kubelet[2514]: E0513 00:21:45.936451 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.936791 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938111 kubelet[2514]: W0513 00:21:45.936814 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.936823 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.937107 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938111 kubelet[2514]: W0513 00:21:45.937114 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.937123 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.937365 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938111 kubelet[2514]: W0513 00:21:45.937372 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.937381 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938111 kubelet[2514]: E0513 00:21:45.937619 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938327 kubelet[2514]: W0513 00:21:45.937626 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938327 kubelet[2514]: E0513 00:21:45.937635 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:45.938327 kubelet[2514]: E0513 00:21:45.937882 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938327 kubelet[2514]: W0513 00:21:45.937890 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938327 kubelet[2514]: E0513 00:21:45.937898 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938327 kubelet[2514]: E0513 00:21:45.938137 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938327 kubelet[2514]: W0513 00:21:45.938145 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938327 kubelet[2514]: E0513 00:21:45.938153 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938490 kubelet[2514]: E0513 00:21:45.938413 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938490 kubelet[2514]: W0513 00:21:45.938434 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938490 kubelet[2514]: E0513 00:21:45.938454 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.938760 kubelet[2514]: E0513 00:21:45.938731 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.938760 kubelet[2514]: W0513 00:21:45.938754 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.938760 kubelet[2514]: E0513 00:21:45.938762 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.939064 kubelet[2514]: E0513 00:21:45.939050 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.939064 kubelet[2514]: W0513 00:21:45.939061 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.939121 kubelet[2514]: E0513 00:21:45.939072 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:45.939437 kubelet[2514]: E0513 00:21:45.939420 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.939437 kubelet[2514]: W0513 00:21:45.939434 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.939504 kubelet[2514]: E0513 00:21:45.939445 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.939658 kubelet[2514]: E0513 00:21:45.939647 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.939658 kubelet[2514]: W0513 00:21:45.939656 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.939718 kubelet[2514]: E0513 00:21:45.939664 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.939897 kubelet[2514]: E0513 00:21:45.939874 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.939897 kubelet[2514]: W0513 00:21:45.939884 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.939897 kubelet[2514]: E0513 00:21:45.939894 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.940100 kubelet[2514]: E0513 00:21:45.940085 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.940100 kubelet[2514]: W0513 00:21:45.940097 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.940165 kubelet[2514]: E0513 00:21:45.940107 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:45.940479 kubelet[2514]: E0513 00:21:45.940457 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.940479 kubelet[2514]: W0513 00:21:45.940467 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.940479 kubelet[2514]: E0513 00:21:45.940476 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:45.940669 kubelet[2514]: E0513 00:21:45.940656 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:45.940669 kubelet[2514]: W0513 00:21:45.940665 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:45.940736 kubelet[2514]: E0513 00:21:45.940672 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:46.077741 containerd[1471]: time="2025-05-13T00:21:46.077373501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:46.077741 containerd[1471]: time="2025-05-13T00:21:46.077449059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:46.077741 containerd[1471]: time="2025-05-13T00:21:46.077466578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:46.078495 containerd[1471]: time="2025-05-13T00:21:46.078389453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:46.097991 systemd[1]: Started cri-containerd-8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea.scope - libcontainer container 8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea. May 13 00:21:46.125633 containerd[1471]: time="2025-05-13T00:21:46.125541134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pjm5j,Uid:c2561169-f0c3-493e-9f11-09b8bb7fadb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\"" May 13 00:21:46.126469 kubelet[2514]: E0513 00:21:46.126420 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:46.494000 kubelet[2514]: E0513 00:21:46.493968 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:46.542765 kubelet[2514]: E0513 00:21:46.542728 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:46.542765 kubelet[2514]: W0513 00:21:46.542747 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:46.542765 kubelet[2514]: E0513 00:21:46.542764 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:46.543143 kubelet[2514]: E0513 00:21:46.543015 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:46.543143 kubelet[2514]: W0513 00:21:46.543023 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:46.543143 kubelet[2514]: E0513 00:21:46.543031 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:46.543281 kubelet[2514]: E0513 00:21:46.543261 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:46.543281 kubelet[2514]: W0513 00:21:46.543270 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:46.543281 kubelet[2514]: E0513 00:21:46.543278 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:46.543519 kubelet[2514]: E0513 00:21:46.543500 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:46.543519 kubelet[2514]: W0513 00:21:46.543510 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:46.543519 kubelet[2514]: E0513 00:21:46.543518 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:46.543776 kubelet[2514]: E0513 00:21:46.543764 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:46.543776 kubelet[2514]: W0513 00:21:46.543774 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:46.543834 kubelet[2514]: E0513 00:21:46.543781 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:47.453083 kubelet[2514]: E0513 00:21:47.453046 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:47.712529 containerd[1471]: time="2025-05-13T00:21:47.712391886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:47.713179 containerd[1471]: time="2025-05-13T00:21:47.713149664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 00:21:47.714213 containerd[1471]: time="2025-05-13T00:21:47.714191598Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:47.716081 containerd[1471]: time="2025-05-13T00:21:47.716049910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:47.716664 containerd[1471]: time="2025-05-13T00:21:47.716624044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.102893638s" May 13 00:21:47.716694 containerd[1471]: time="2025-05-13T00:21:47.716662900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 00:21:47.720699 containerd[1471]: time="2025-05-13T00:21:47.720670905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:21:47.732344 containerd[1471]: time="2025-05-13T00:21:47.732307235Z" level=info msg="CreateContainer within sandbox \"2b7889a9b65eabd27c498869478c90a01fbf07478b03ccc9f959bf22390dc2a3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:21:47.746138 containerd[1471]: time="2025-05-13T00:21:47.746089590Z" level=info msg="CreateContainer within sandbox \"2b7889a9b65eabd27c498869478c90a01fbf07478b03ccc9f959bf22390dc2a3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e7db62c0ec54342830b87f463ad8bfaec684130c670bf28458c7e94f16d7c60\"" May 13 00:21:47.749294 containerd[1471]: time="2025-05-13T00:21:47.749272679Z" level=info msg="StartContainer for \"3e7db62c0ec54342830b87f463ad8bfaec684130c670bf28458c7e94f16d7c60\"" May 13 00:21:47.771987 systemd[1]: Started cri-containerd-3e7db62c0ec54342830b87f463ad8bfaec684130c670bf28458c7e94f16d7c60.scope - libcontainer container 3e7db62c0ec54342830b87f463ad8bfaec684130c670bf28458c7e94f16d7c60. 
May 13 00:21:47.811764 containerd[1471]: time="2025-05-13T00:21:47.811153516Z" level=info msg="StartContainer for \"3e7db62c0ec54342830b87f463ad8bfaec684130c670bf28458c7e94f16d7c60\" returns successfully"
May 13 00:21:48.501096 kubelet[2514]: E0513 00:21:48.501069 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:21:48.522617 kubelet[2514]: I0513 00:21:48.522475 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-787f747d64-5jgzc" podStartSLOduration=1.41524483 podStartE2EDuration="3.522458853s" podCreationTimestamp="2025-05-13 00:21:45 +0000 UTC" firstStartedPulling="2025-05-13 00:21:45.61336509 +0000 UTC m=+13.248733887" lastFinishedPulling="2025-05-13 00:21:47.720579122 +0000 UTC m=+15.355947910" observedRunningTime="2025-05-13 00:21:48.522021336 +0000 UTC m=+16.157390123" watchObservedRunningTime="2025-05-13 00:21:48.522458853 +0000 UTC m=+16.157827640"
May 13 00:21:48.557380 kubelet[2514]: E0513 00:21:48.557354 2514 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:21:48.557380 kubelet[2514]: W0513 00:21:48.557377 2514 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:21:48.559511 kubelet[2514]: E0513 00:21:48.559481 2514 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat as a group from 00:21:48.559731 through 00:21:48.571887]
May 13 00:21:49.227956 containerd[1471]: time="2025-05-13T00:21:49.227890387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:21:49.228904 containerd[1471]: time="2025-05-13T00:21:49.228800896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 13 00:21:49.230037 containerd[1471]: time="2025-05-13T00:21:49.230000284Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:21:49.232417 containerd[1471]: time="2025-05-13T00:21:49.232379458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:21:49.233193 containerd[1471]: time="2025-05-13T00:21:49.233157941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.512458423s"
May 13 00:21:49.233227 containerd[1471]: time="2025-05-13T00:21:49.233195463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 13 00:21:49.237202 containerd[1471]: time="2025-05-13T00:21:49.237176150Z" level=info msg="CreateContainer within sandbox \"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 13 00:21:49.255412 containerd[1471]: time="2025-05-13T00:21:49.255368852Z" level=info msg="CreateContainer within sandbox \"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73\""
May 13 00:21:49.255901 containerd[1471]: time="2025-05-13T00:21:49.255877087Z" level=info msg="StartContainer for \"68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73\""
May 13 00:21:49.289039 systemd[1]: Started cri-containerd-68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73.scope - libcontainer container 68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73.
May 13 00:21:49.354091 systemd[1]: cri-containerd-68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73.scope: Deactivated successfully.
May 13 00:21:49.368958 containerd[1471]: time="2025-05-13T00:21:49.368892303Z" level=info msg="StartContainer for \"68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73\" returns successfully" May 13 00:21:49.453493 kubelet[2514]: E0513 00:21:49.453205 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:49.645324 kubelet[2514]: I0513 00:21:49.645170 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:21:49.645872 kubelet[2514]: E0513 00:21:49.645518 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:49.646552 kubelet[2514]: E0513 00:21:49.646189 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:49.647463 containerd[1471]: time="2025-05-13T00:21:49.644378661Z" level=info msg="shim disconnected" id=68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73 namespace=k8s.io May 13 00:21:49.647539 containerd[1471]: time="2025-05-13T00:21:49.647463058Z" level=warning msg="cleaning up after shim disconnected" id=68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73 namespace=k8s.io May 13 00:21:49.647539 containerd[1471]: time="2025-05-13T00:21:49.647475987Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:21:49.662662 containerd[1471]: time="2025-05-13T00:21:49.662601750Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:21:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 00:21:49.731166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68d53d62ef202bcc5d11d9e51e7141ed4c5c187e94736aac44b08db80b512d73-rootfs.mount: Deactivated successfully. 
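
The "Nameserver limits exceeded" warnings repeating through this stretch are kubelet clamping the node's resolv.conf when it composes pod DNS config: resolvers honor at most three nameservers, so any extra entries are dropped and the surviving prefix is logged as the applied nameserver line. A rough sketch of that behavior, with a hypothetical fourth server standing in for whatever pushed this node over the limit:

    # Sketch of the clamping behind kubelet's "Nameserver limits exceeded"
    # warning: glibc honors at most three nameservers, so kubelet truncates
    # the node's list for pod DNS config and logs the applied prefix.
    MAX_NAMESERVERS = 3

    def applied_nameservers(node_resolv: list[str]) -> list[str]:
        return node_resolv[:MAX_NAMESERVERS]

    # The fourth entry here is hypothetical; the applied line then matches
    # the log: "1.1.1.1 1.0.0.1 8.8.8.8".
    print(" ".join(applied_nameservers(
        ["1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.2"])))
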
May 13 00:21:50.648456 kubelet[2514]: E0513 00:21:50.648421 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:50.649035 containerd[1471]: time="2025-05-13T00:21:50.649001353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:21:51.452550 kubelet[2514]: E0513 00:21:51.452486 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:53.456355 kubelet[2514]: E0513 00:21:53.456299 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:55.452880 kubelet[2514]: E0513 00:21:55.452786 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:56.762431 containerd[1471]: time="2025-05-13T00:21:56.762366543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:56.763209 containerd[1471]: time="2025-05-13T00:21:56.763141443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 00:21:56.764339 containerd[1471]: time="2025-05-13T00:21:56.764309850Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:56.766490 containerd[1471]: time="2025-05-13T00:21:56.766451834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:56.767253 containerd[1471]: time="2025-05-13T00:21:56.767216473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.118175494s" May 13 00:21:56.767253 containerd[1471]: time="2025-05-13T00:21:56.767249412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 00:21:56.769165 containerd[1471]: time="2025-05-13T00:21:56.769138614Z" level=info msg="CreateContainer within sandbox \"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:21:56.783867 containerd[1471]: time="2025-05-13T00:21:56.783815841Z" level=info msg="CreateContainer within sandbox 
\"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281\"" May 13 00:21:56.785448 containerd[1471]: time="2025-05-13T00:21:56.784325473Z" level=info msg="StartContainer for \"1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281\"" May 13 00:21:56.820004 systemd[1]: Started cri-containerd-1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281.scope - libcontainer container 1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281. May 13 00:21:56.852527 containerd[1471]: time="2025-05-13T00:21:56.852467868Z" level=info msg="StartContainer for \"1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281\" returns successfully" May 13 00:21:57.453494 kubelet[2514]: E0513 00:21:57.453440 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:58.479683 kubelet[2514]: E0513 00:21:58.479634 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:59.453133 kubelet[2514]: E0513 00:21:59.453074 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:21:59.463078 kubelet[2514]: E0513 00:21:59.463051 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:59.679273 systemd[1]: cri-containerd-1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281.scope: Deactivated successfully. May 13 00:21:59.699671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281-rootfs.mount: Deactivated successfully. May 13 00:21:59.763630 kubelet[2514]: I0513 00:21:59.763495 2514 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 00:22:00.025530 systemd[1]: Created slice kubepods-burstable-pode1b80cf1_00a9_4e0b_8b66_2efa72d2b7ca.slice - libcontainer container kubepods-burstable-pode1b80cf1_00a9_4e0b_8b66_2efa72d2b7ca.slice. May 13 00:22:00.034404 systemd[1]: Created slice kubepods-besteffort-pod793260ed_37cd_4660_a22c_c5f24697994b.slice - libcontainer container kubepods-besteffort-pod793260ed_37cd_4660_a22c_c5f24697994b.slice. May 13 00:22:00.039225 systemd[1]: Created slice kubepods-besteffort-pod60c928c1_a188_42a1_b0d8_c492716938ca.slice - libcontainer container kubepods-besteffort-pod60c928c1_a188_42a1_b0d8_c492716938ca.slice. May 13 00:22:00.044114 systemd[1]: Created slice kubepods-besteffort-pod3b97b55b_0703_40cf_9f00_a260ed5d0dc1.slice - libcontainer container kubepods-besteffort-pod3b97b55b_0703_40cf_9f00_a260ed5d0dc1.slice. 
May 13 00:22:00.048533 systemd[1]: Created slice kubepods-burstable-pod8a9a8a5b_440e_4b4f_8eb3_b78794cd5abf.slice - libcontainer container kubepods-burstable-pod8a9a8a5b_440e_4b4f_8eb3_b78794cd5abf.slice. May 13 00:22:00.054821 kubelet[2514]: I0513 00:22:00.054778 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db8x8\" (UniqueName: \"kubernetes.io/projected/3b97b55b-0703-40cf-9f00-a260ed5d0dc1-kube-api-access-db8x8\") pod \"calico-kube-controllers-65dcd6bcdf-dhvvt\" (UID: \"3b97b55b-0703-40cf-9f00-a260ed5d0dc1\") " pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" May 13 00:22:00.054821 kubelet[2514]: I0513 00:22:00.054815 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsmn\" (UniqueName: \"kubernetes.io/projected/8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf-kube-api-access-fwsmn\") pod \"coredns-668d6bf9bc-5xmrr\" (UID: \"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf\") " pod="kube-system/coredns-668d6bf9bc-5xmrr" May 13 00:22:00.054932 kubelet[2514]: I0513 00:22:00.054830 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b97b55b-0703-40cf-9f00-a260ed5d0dc1-tigera-ca-bundle\") pod \"calico-kube-controllers-65dcd6bcdf-dhvvt\" (UID: \"3b97b55b-0703-40cf-9f00-a260ed5d0dc1\") " pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" May 13 00:22:00.054932 kubelet[2514]: I0513 00:22:00.054850 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/793260ed-37cd-4660-a22c-c5f24697994b-calico-apiserver-certs\") pod \"calico-apiserver-5ff4dd9db7-wvgwd\" (UID: \"793260ed-37cd-4660-a22c-c5f24697994b\") " pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" May 13 00:22:00.054932 kubelet[2514]: I0513 00:22:00.054879 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca-config-volume\") pod \"coredns-668d6bf9bc-7ctn5\" (UID: \"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca\") " pod="kube-system/coredns-668d6bf9bc-7ctn5" May 13 00:22:00.055019 kubelet[2514]: I0513 00:22:00.054955 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/60c928c1-a188-42a1-b0d8-c492716938ca-calico-apiserver-certs\") pod \"calico-apiserver-5ff4dd9db7-f2txh\" (UID: \"60c928c1-a188-42a1-b0d8-c492716938ca\") " pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" May 13 00:22:00.055019 kubelet[2514]: I0513 00:22:00.055000 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wj4w\" (UniqueName: \"kubernetes.io/projected/793260ed-37cd-4660-a22c-c5f24697994b-kube-api-access-8wj4w\") pod \"calico-apiserver-5ff4dd9db7-wvgwd\" (UID: \"793260ed-37cd-4660-a22c-c5f24697994b\") " pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" May 13 00:22:00.055068 kubelet[2514]: I0513 00:22:00.055017 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf-config-volume\") pod \"coredns-668d6bf9bc-5xmrr\" (UID: 
\"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf\") " pod="kube-system/coredns-668d6bf9bc-5xmrr" May 13 00:22:00.055068 kubelet[2514]: I0513 00:22:00.055038 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9l6j\" (UniqueName: \"kubernetes.io/projected/e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca-kube-api-access-x9l6j\") pod \"coredns-668d6bf9bc-7ctn5\" (UID: \"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca\") " pod="kube-system/coredns-668d6bf9bc-7ctn5" May 13 00:22:00.055135 kubelet[2514]: I0513 00:22:00.055080 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgj7\" (UniqueName: \"kubernetes.io/projected/60c928c1-a188-42a1-b0d8-c492716938ca-kube-api-access-dxgj7\") pod \"calico-apiserver-5ff4dd9db7-f2txh\" (UID: \"60c928c1-a188-42a1-b0d8-c492716938ca\") " pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" May 13 00:22:00.330318 kubelet[2514]: E0513 00:22:00.330158 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:00.350952 kubelet[2514]: E0513 00:22:00.350906 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:01.458717 systemd[1]: Created slice kubepods-besteffort-pod6b656054_c5df_4336_9a83_8d89d2e6a28d.slice - libcontainer container kubepods-besteffort-pod6b656054_c5df_4336_9a83_8d89d2e6a28d.slice. May 13 00:22:01.518312 containerd[1471]: time="2025-05-13T00:22:01.517963859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9sg,Uid:6b656054-c5df-4336-9a83-8d89d2e6a28d,Namespace:calico-system,Attempt:0,}" May 13 00:22:01.518312 containerd[1471]: time="2025-05-13T00:22:01.518012069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65dcd6bcdf-dhvvt,Uid:3b97b55b-0703-40cf-9f00-a260ed5d0dc1,Namespace:calico-system,Attempt:0,}" May 13 00:22:01.518312 containerd[1471]: time="2025-05-13T00:22:01.518183905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-f2txh,Uid:60c928c1-a188-42a1-b0d8-c492716938ca,Namespace:calico-apiserver,Attempt:0,}" May 13 00:22:01.518312 containerd[1471]: time="2025-05-13T00:22:01.518258499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ctn5,Uid:e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca,Namespace:kube-system,Attempt:0,}" May 13 00:22:01.518850 containerd[1471]: time="2025-05-13T00:22:01.518406846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-wvgwd,Uid:793260ed-37cd-4660-a22c-c5f24697994b,Namespace:calico-apiserver,Attempt:0,}" May 13 00:22:01.518850 containerd[1471]: time="2025-05-13T00:22:01.518443512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5xmrr,Uid:8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf,Namespace:kube-system,Attempt:0,}" May 13 00:22:01.853347 containerd[1471]: time="2025-05-13T00:22:01.853180613Z" level=info msg="shim disconnected" id=1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281 namespace=k8s.io May 13 00:22:01.853347 containerd[1471]: time="2025-05-13T00:22:01.853248653Z" level=warning msg="cleaning up after shim disconnected" id=1e63629acd4b0e2aadcbb655d797a7de82520548e84eaabc2fe826be72029281 namespace=k8s.io May 13 00:22:01.853347 
containerd[1471]: time="2025-05-13T00:22:01.853258253Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:22:02.468393 kubelet[2514]: E0513 00:22:02.468356 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:02.469117 containerd[1471]: time="2025-05-13T00:22:02.468892657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:22:02.925440 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:52788.service - OpenSSH per-connection server daemon (10.0.0.1:52788). May 13 00:22:02.978416 sshd[3345]: Accepted publickey for core from 10.0.0.1 port 52788 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:02.980236 sshd[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:02.985254 systemd-logind[1458]: New session 8 of user core. May 13 00:22:02.996170 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:22:03.160418 sshd[3345]: pam_unix(sshd:session): session closed for user core May 13 00:22:03.164881 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:52788.service: Deactivated successfully. May 13 00:22:03.167432 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:22:03.168178 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. May 13 00:22:03.169209 systemd-logind[1458]: Removed session 8. May 13 00:22:03.649566 containerd[1471]: time="2025-05-13T00:22:03.649500673Z" level=error msg="Failed to destroy network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.650120 containerd[1471]: time="2025-05-13T00:22:03.650011624Z" level=error msg="encountered an error cleaning up failed sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.650120 containerd[1471]: time="2025-05-13T00:22:03.650057779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-f2txh,Uid:60c928c1-a188-42a1-b0d8-c492716938ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.650330 kubelet[2514]: E0513 00:22:03.650283 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.650623 kubelet[2514]: E0513 00:22:03.650359 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" May 13 00:22:03.650623 kubelet[2514]: E0513 00:22:03.650382 2514 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" May 13 00:22:03.650623 kubelet[2514]: E0513 00:22:03.650423 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff4dd9db7-f2txh_calico-apiserver(60c928c1-a188-42a1-b0d8-c492716938ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff4dd9db7-f2txh_calico-apiserver(60c928c1-a188-42a1-b0d8-c492716938ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" podUID="60c928c1-a188-42a1-b0d8-c492716938ca" May 13 00:22:03.708443 containerd[1471]: time="2025-05-13T00:22:03.708388170Z" level=error msg="Failed to destroy network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.708837 containerd[1471]: time="2025-05-13T00:22:03.708801841Z" level=error msg="encountered an error cleaning up failed sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.708897 containerd[1471]: time="2025-05-13T00:22:03.708867486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9sg,Uid:6b656054-c5df-4336-9a83-8d89d2e6a28d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.709132 kubelet[2514]: E0513 00:22:03.709087 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.709216 kubelet[2514]: E0513 00:22:03.709153 2514 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9sg" May 13 00:22:03.709216 kubelet[2514]: E0513 00:22:03.709174 2514 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms9sg" May 13 00:22:03.709286 kubelet[2514]: E0513 00:22:03.709216 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms9sg_calico-system(6b656054-c5df-4336-9a83-8d89d2e6a28d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms9sg_calico-system(6b656054-c5df-4336-9a83-8d89d2e6a28d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:22:03.806612 containerd[1471]: time="2025-05-13T00:22:03.806549701Z" level=error msg="Failed to destroy network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.806991 containerd[1471]: time="2025-05-13T00:22:03.806964354Z" level=error msg="encountered an error cleaning up failed sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.807053 containerd[1471]: time="2025-05-13T00:22:03.807012654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65dcd6bcdf-dhvvt,Uid:3b97b55b-0703-40cf-9f00-a260ed5d0dc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.807329 kubelet[2514]: E0513 00:22:03.807286 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 
00:22:03.807399 kubelet[2514]: E0513 00:22:03.807349 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" May 13 00:22:03.807399 kubelet[2514]: E0513 00:22:03.807368 2514 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" May 13 00:22:03.807451 kubelet[2514]: E0513 00:22:03.807414 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65dcd6bcdf-dhvvt_calico-system(3b97b55b-0703-40cf-9f00-a260ed5d0dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65dcd6bcdf-dhvvt_calico-system(3b97b55b-0703-40cf-9f00-a260ed5d0dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" podUID="3b97b55b-0703-40cf-9f00-a260ed5d0dc1" May 13 00:22:03.863475 containerd[1471]: time="2025-05-13T00:22:03.863417122Z" level=error msg="Failed to destroy network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.864117 containerd[1471]: time="2025-05-13T00:22:03.864045615Z" level=error msg="encountered an error cleaning up failed sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.864253 containerd[1471]: time="2025-05-13T00:22:03.864132484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5xmrr,Uid:8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.864425 kubelet[2514]: E0513 00:22:03.864369 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.864658 kubelet[2514]: E0513 00:22:03.864635 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5xmrr" May 13 00:22:03.864658 kubelet[2514]: E0513 00:22:03.864659 2514 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5xmrr" May 13 00:22:03.864775 kubelet[2514]: E0513 00:22:03.864718 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5xmrr_kube-system(8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5xmrr_kube-system(8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5xmrr" podUID="8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf" May 13 00:22:03.886410 containerd[1471]: time="2025-05-13T00:22:03.886355667Z" level=error msg="Failed to destroy network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.886799 containerd[1471]: time="2025-05-13T00:22:03.886767684Z" level=error msg="encountered an error cleaning up failed sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.886840 containerd[1471]: time="2025-05-13T00:22:03.886819722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ctn5,Uid:e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.887091 kubelet[2514]: E0513 00:22:03.887041 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.887154 kubelet[2514]: E0513 00:22:03.887096 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7ctn5" May 13 00:22:03.887154 kubelet[2514]: E0513 00:22:03.887117 2514 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7ctn5" May 13 00:22:03.887212 kubelet[2514]: E0513 00:22:03.887159 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7ctn5_kube-system(e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7ctn5_kube-system(e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7ctn5" podUID="e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca" May 13 00:22:03.905091 containerd[1471]: time="2025-05-13T00:22:03.904952008Z" level=error msg="Failed to destroy network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.905392 containerd[1471]: time="2025-05-13T00:22:03.905364556Z" level=error msg="encountered an error cleaning up failed sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.905483 containerd[1471]: time="2025-05-13T00:22:03.905421122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-wvgwd,Uid:793260ed-37cd-4660-a22c-c5f24697994b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.905758 kubelet[2514]: E0513 00:22:03.905678 2514 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:03.905831 kubelet[2514]: E0513 00:22:03.905770 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" May 13 00:22:03.905831 kubelet[2514]: E0513 00:22:03.905796 2514 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" May 13 00:22:03.905901 kubelet[2514]: E0513 00:22:03.905847 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff4dd9db7-wvgwd_calico-apiserver(793260ed-37cd-4660-a22c-c5f24697994b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff4dd9db7-wvgwd_calico-apiserver(793260ed-37cd-4660-a22c-c5f24697994b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" podUID="793260ed-37cd-4660-a22c-c5f24697994b" May 13 00:22:04.303392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a-shm.mount: Deactivated successfully. May 13 00:22:04.303520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e-shm.mount: Deactivated successfully. May 13 00:22:04.303618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36-shm.mount: Deactivated successfully. 
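
Every RunPodSandbox and StopPodSandbox failure in this stretch carries the same root cause in its error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running with /var/lib/calico/ mounted from the host, and the calico-node container has not started yet (its image is still downloading below). A sketch mirroring that readiness check (the real plugin is Go; this is illustrative only):

    # Mirror of the readiness gate behind every "failed (add)" and
    # "failed (delete)" error above: the Calico CNI plugin refuses to wire
    # up or tear down pod networking until calico/node has written its
    # node name to the shared host path.
    from pathlib import Path

    NODENAME = Path("/var/lib/calico/nodename")

    def calico_node_ready() -> bool:
        return NODENAME.is_file()

    if __name__ == "__main__":
        if not calico_node_ready():
            print("stat /var/lib/calico/nodename: no such file or directory")
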
May 13 00:22:04.471700 kubelet[2514]: I0513 00:22:04.471664 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:04.472549 containerd[1471]: time="2025-05-13T00:22:04.472510965Z" level=info msg="StopPodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\"" May 13 00:22:04.472728 containerd[1471]: time="2025-05-13T00:22:04.472707749Z" level=info msg="Ensure that sandbox 8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7 in task-service has been cleanup successfully" May 13 00:22:04.473165 kubelet[2514]: I0513 00:22:04.473142 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:04.473932 containerd[1471]: time="2025-05-13T00:22:04.473539085Z" level=info msg="StopPodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\"" May 13 00:22:04.473932 containerd[1471]: time="2025-05-13T00:22:04.473698473Z" level=info msg="Ensure that sandbox f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee in task-service has been cleanup successfully" May 13 00:22:04.475124 kubelet[2514]: I0513 00:22:04.475101 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:04.475977 containerd[1471]: time="2025-05-13T00:22:04.475590205Z" level=info msg="StopPodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\"" May 13 00:22:04.475977 containerd[1471]: time="2025-05-13T00:22:04.475748190Z" level=info msg="Ensure that sandbox 198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36 in task-service has been cleanup successfully" May 13 00:22:04.477161 kubelet[2514]: I0513 00:22:04.477130 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:04.478431 containerd[1471]: time="2025-05-13T00:22:04.478394521Z" level=info msg="StopPodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\"" May 13 00:22:04.478606 containerd[1471]: time="2025-05-13T00:22:04.478584260Z" level=info msg="Ensure that sandbox d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e in task-service has been cleanup successfully" May 13 00:22:04.479558 kubelet[2514]: I0513 00:22:04.479494 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:04.480841 containerd[1471]: time="2025-05-13T00:22:04.480788604Z" level=info msg="StopPodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\"" May 13 00:22:04.481726 containerd[1471]: time="2025-05-13T00:22:04.481008526Z" level=info msg="Ensure that sandbox 0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647 in task-service has been cleanup successfully" May 13 00:22:04.483396 kubelet[2514]: I0513 00:22:04.483361 2514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:04.484777 containerd[1471]: time="2025-05-13T00:22:04.484663587Z" level=info msg="StopPodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\"" May 13 00:22:04.485247 
containerd[1471]: time="2025-05-13T00:22:04.485123020Z" level=info msg="Ensure that sandbox a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a in task-service has been cleanup successfully" May 13 00:22:04.537225 containerd[1471]: time="2025-05-13T00:22:04.536985346Z" level=error msg="StopPodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" failed" error="failed to destroy network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:04.537523 kubelet[2514]: E0513 00:22:04.537458 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:04.537622 kubelet[2514]: E0513 00:22:04.537531 2514 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36"} May 13 00:22:04.537622 kubelet[2514]: E0513 00:22:04.537598 2514 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60c928c1-a188-42a1-b0d8-c492716938ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:04.537781 kubelet[2514]: E0513 00:22:04.537625 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60c928c1-a188-42a1-b0d8-c492716938ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" podUID="60c928c1-a188-42a1-b0d8-c492716938ca" May 13 00:22:04.541223 containerd[1471]: time="2025-05-13T00:22:04.541170414Z" level=error msg="StopPodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" failed" error="failed to destroy network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:04.541778 kubelet[2514]: E0513 00:22:04.541612 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:04.541778 kubelet[2514]: E0513 00:22:04.541667 2514 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a"} May 13 00:22:04.541778 kubelet[2514]: E0513 00:22:04.541710 2514 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b97b55b-0703-40cf-9f00-a260ed5d0dc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:04.541778 kubelet[2514]: E0513 00:22:04.541743 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b97b55b-0703-40cf-9f00-a260ed5d0dc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" podUID="3b97b55b-0703-40cf-9f00-a260ed5d0dc1" May 13 00:22:04.542123 containerd[1471]: time="2025-05-13T00:22:04.541725274Z" level=error msg="StopPodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" failed" error="failed to destroy network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:04.542276 kubelet[2514]: E0513 00:22:04.542249 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:04.542438 kubelet[2514]: E0513 00:22:04.542353 2514 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee"} May 13 00:22:04.542438 kubelet[2514]: E0513 00:22:04.542389 2514 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:04.542438 kubelet[2514]: E0513 00:22:04.542414 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7ctn5" podUID="e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca" May 13 00:22:04.547124 containerd[1471]: time="2025-05-13T00:22:04.547012305Z" level=error msg="StopPodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" failed" error="failed to destroy network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:04.547282 kubelet[2514]: E0513 00:22:04.547233 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:04.547413 kubelet[2514]: E0513 00:22:04.547286 2514 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7"} May 13 00:22:04.547413 kubelet[2514]: E0513 00:22:04.547323 2514 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"793260ed-37cd-4660-a22c-c5f24697994b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:04.547413 kubelet[2514]: E0513 00:22:04.547351 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"793260ed-37cd-4660-a22c-c5f24697994b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" podUID="793260ed-37cd-4660-a22c-c5f24697994b" May 13 00:22:04.549190 containerd[1471]: time="2025-05-13T00:22:04.549157308Z" level=error msg="StopPodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" failed" error="failed to destroy network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:04.549313 kubelet[2514]: E0513 00:22:04.549287 2514 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:04.549378 kubelet[2514]: E0513 00:22:04.549316 2514 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e"} May 13 00:22:04.549378 kubelet[2514]: E0513 00:22:04.549339 2514 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b656054-c5df-4336-9a83-8d89d2e6a28d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:04.549378 kubelet[2514]: E0513 00:22:04.549356 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b656054-c5df-4336-9a83-8d89d2e6a28d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms9sg" podUID="6b656054-c5df-4336-9a83-8d89d2e6a28d" May 13 00:22:04.552596 containerd[1471]: time="2025-05-13T00:22:04.552561776Z" level=error msg="StopPodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" failed" error="failed to destroy network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:04.552771 kubelet[2514]: E0513 00:22:04.552710 2514 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:04.552771 kubelet[2514]: E0513 00:22:04.552754 2514 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647"} May 13 00:22:04.552901 kubelet[2514]: E0513 00:22:04.552776 2514 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:04.552901 kubelet[2514]: E0513 00:22:04.552792 2514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5xmrr" podUID="8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf" May 13 00:22:07.982093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885355804.mount: Deactivated successfully. May 13 00:22:08.176846 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:33576.service - OpenSSH per-connection server daemon (10.0.0.1:33576). May 13 00:22:08.255880 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 33576 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:08.257618 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:08.261355 systemd-logind[1458]: New session 9 of user core. May 13 00:22:08.267978 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:22:08.363888 containerd[1471]: time="2025-05-13T00:22:08.357603140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:08.467748 containerd[1471]: time="2025-05-13T00:22:08.467649360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 00:22:08.470347 sshd[3744]: pam_unix(sshd:session): session closed for user core May 13 00:22:08.475379 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:33576.service: Deactivated successfully. May 13 00:22:08.477438 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:22:08.478161 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. May 13 00:22:08.479186 systemd-logind[1458]: Removed session 9. 
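
Note: the repeated KillPodSandboxError/StopPodSandbox failures above share one root cause, which the message itself names: the Calico CNI plugin cannot read /var/lib/calico/nodename. That file is written by the calico/node container into a hostPath mount once it starts, and the calico/node image is still being pulled at this point in the log, so every CNI DEL fails and kubelet keeps retrying the sandbox teardowns. A minimal sketch of the failing lookup, in Go (illustrative only; the real plugin also honors environment overrides):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodename mirrors the lookup that fails in the log: calico/node writes
    // its node name to this hostPath file at startup; until then reads fail
    // with ENOENT and the CNI DEL is aborted.
    func nodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("stat /var/lib/calico/nodename: %w", err)
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        if _, err := nodename(); err != nil {
            fmt.Println(err) // the error text kubelet keeps reporting above
        }
    }
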
May 13 00:22:08.596658 containerd[1471]: time="2025-05-13T00:22:08.596512120Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:08.602833 containerd[1471]: time="2025-05-13T00:22:08.602799517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:08.603434 containerd[1471]: time="2025-05-13T00:22:08.603389408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.13445173s" May 13 00:22:08.603485 containerd[1471]: time="2025-05-13T00:22:08.603434651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 00:22:08.614195 containerd[1471]: time="2025-05-13T00:22:08.614141924Z" level=info msg="CreateContainer within sandbox \"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:22:08.635361 containerd[1471]: time="2025-05-13T00:22:08.635317445Z" level=info msg="CreateContainer within sandbox \"8040d1afff7aecb72d635d81f8abd3b4eb4b18a4c1088ff772a29236a1503fea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357\"" May 13 00:22:08.635921 containerd[1471]: time="2025-05-13T00:22:08.635819226Z" level=info msg="StartContainer for \"c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357\"" May 13 00:22:08.701113 systemd[1]: Started cri-containerd-c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357.scope - libcontainer container c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357. May 13 00:22:08.911509 containerd[1471]: time="2025-05-13T00:22:08.911344426Z" level=info msg="StartContainer for \"c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357\" returns successfully" May 13 00:22:08.944277 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:22:08.944471 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
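
Note: the pull above moved 144068748 bytes in 6.13445173 s, i.e. roughly 144 MB / 6.13 s ≈ 23.5 MB/s (≈ 188 Mbit/s); the unpacked image size containerd reports is 144068610 bytes. The WireGuard module load that follows is consistent with calico-node probing the kernel for WireGuard support as it starts, even when pod-to-pod encryption is not enabled.
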
May 13 00:22:09.494447 kubelet[2514]: E0513 00:22:09.494360 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:09.541356 kubelet[2514]: I0513 00:22:09.541292 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pjm5j" podStartSLOduration=2.06413666 podStartE2EDuration="24.541275209s" podCreationTimestamp="2025-05-13 00:21:45 +0000 UTC" firstStartedPulling="2025-05-13 00:21:46.126977959 +0000 UTC m=+13.762346746" lastFinishedPulling="2025-05-13 00:22:08.604116508 +0000 UTC m=+36.239485295" observedRunningTime="2025-05-13 00:22:09.540830955 +0000 UTC m=+37.176199743" watchObservedRunningTime="2025-05-13 00:22:09.541275209 +0000 UTC m=+37.176643996" May 13 00:22:10.496002 kubelet[2514]: E0513 00:22:10.495962 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:10.520829 systemd[1]: run-containerd-runc-k8s.io-c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357-runc.XbIOXG.mount: Deactivated successfully. May 13 00:22:10.871053 kubelet[2514]: I0513 00:22:10.870901 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:10.871742 kubelet[2514]: E0513 00:22:10.871375 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.492894 kernel: bpftool[4058]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:22:11.497393 kubelet[2514]: E0513 00:22:11.497372 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.721220 systemd-networkd[1406]: vxlan.calico: Link UP May 13 00:22:11.721234 systemd-networkd[1406]: vxlan.calico: Gained carrier May 13 00:22:12.934050 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL May 13 00:22:13.483735 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:33588.service - OpenSSH per-connection server daemon (10.0.0.1:33588). May 13 00:22:13.541879 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 33588 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:13.543850 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:13.548261 systemd-logind[1458]: New session 10 of user core. May 13 00:22:13.561116 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:22:13.691550 sshd[4135]: pam_unix(sshd:session): session closed for user core May 13 00:22:13.695210 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:33588.service: Deactivated successfully. May 13 00:22:13.697175 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:22:13.697862 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. May 13 00:22:13.698988 systemd-logind[1458]: Removed session 10. 
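
Note: the recurring dns.go:153 "Nameserver limits exceeded" events mean the node's resolv.conf lists more nameservers than the classic resolver limit of three (MAXNS), so kubelet truncates the list when composing a pod's resolv.conf; the applied line keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal sketch of that truncation (illustrative, not kubelet's actual code; the fourth server below is hypothetical):

    package main

    import "fmt"

    // maxNameservers is the classic resolv.conf limit (MAXNS) that kubelet
    // enforces when building a pod's resolver configuration.
    const maxNameservers = 3

    func applyLimit(ns []string) []string {
        if len(ns) > maxNameservers {
            ns = ns[:maxNameservers] // extras are dropped, hence the warning
        }
        return ns
    }

    func main() {
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(applyLimit(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }
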
May 13 00:22:16.453539 containerd[1471]: time="2025-05-13T00:22:16.453483832Z" level=info msg="StopPodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\"" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.501 [INFO][4166] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.501 [INFO][4166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" iface="eth0" netns="/var/run/netns/cni-1a235b1e-996a-a9dd-6013-e325b5c9f0ec" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.502 [INFO][4166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" iface="eth0" netns="/var/run/netns/cni-1a235b1e-996a-a9dd-6013-e325b5c9f0ec" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.502 [INFO][4166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" iface="eth0" netns="/var/run/netns/cni-1a235b1e-996a-a9dd-6013-e325b5c9f0ec" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.503 [INFO][4166] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.503 [INFO][4166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.555 [INFO][4175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.555 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.556 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.563 [WARNING][4175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.563 [INFO][4175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.565 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:16.570684 containerd[1471]: 2025-05-13 00:22:16.567 [INFO][4166] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:16.571196 containerd[1471]: time="2025-05-13T00:22:16.570909356Z" level=info msg="TearDown network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" successfully" May 13 00:22:16.571196 containerd[1471]: time="2025-05-13T00:22:16.570944786Z" level=info msg="StopPodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" returns successfully" May 13 00:22:16.572205 containerd[1471]: time="2025-05-13T00:22:16.571823294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9sg,Uid:6b656054-c5df-4336-9a83-8d89d2e6a28d,Namespace:calico-system,Attempt:1,}" May 13 00:22:16.573824 systemd[1]: run-netns-cni\x2d1a235b1e\x2d996a\x2da9dd\x2d6013\x2de325b5c9f0ec.mount: Deactivated successfully. May 13 00:22:16.701276 systemd-networkd[1406]: cali41971bf9613: Link UP May 13 00:22:16.701516 systemd-networkd[1406]: cali41971bf9613: Gained carrier May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.632 [INFO][4183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ms9sg-eth0 csi-node-driver- calico-system 6b656054-c5df-4336-9a83-8d89d2e6a28d 859 0 2025-05-13 00:21:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ms9sg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali41971bf9613 [] []}} ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.632 [INFO][4183] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.661 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" HandleID="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.670 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" HandleID="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005d0330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ms9sg", "timestamp":"2025-05-13 00:22:16.661729405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.670 
[INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.670 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.670 [INFO][4198] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.672 [INFO][4198] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.677 [INFO][4198] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.680 [INFO][4198] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.682 [INFO][4198] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.684 [INFO][4198] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.684 [INFO][4198] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.685 [INFO][4198] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119 May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.688 [INFO][4198] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.694 [INFO][4198] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.694 [INFO][4198] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" host="localhost" May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.694 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
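
Note: the IPAM transaction above is Calico's block-affinity scheme at work: the pool is carved into /26 blocks of 64 addresses, a block is pinned to this host ("Trying affinity for 192.168.88.128/26"), and addresses are then claimed from it under the host-wide lock. A small sketch of the block arithmetic (standard library only):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        size := 1 << (32 - block.Bits()) // a /26 holds 64 addresses
        first := block.Addr().Next()     // .129, the address claimed above
        fmt.Printf("block=%s size=%d first=%s\n", block, size, first)
    }
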
May 13 00:22:16.717468 containerd[1471]: 2025-05-13 00:22:16.694 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" HandleID="k8s-pod-network.4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.718038 containerd[1471]: 2025-05-13 00:22:16.698 [INFO][4183] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ms9sg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b656054-c5df-4336-9a83-8d89d2e6a28d", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ms9sg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41971bf9613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:16.718038 containerd[1471]: 2025-05-13 00:22:16.698 [INFO][4183] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.718038 containerd[1471]: 2025-05-13 00:22:16.698 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41971bf9613 ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.718038 containerd[1471]: 2025-05-13 00:22:16.701 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.718038 containerd[1471]: 2025-05-13 00:22:16.701 [INFO][4183] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ms9sg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b656054-c5df-4336-9a83-8d89d2e6a28d", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119", Pod:"csi-node-driver-ms9sg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41971bf9613", MAC:"12:18:99:61:a0:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:16.718038 containerd[1471]: 2025-05-13 00:22:16.711 [INFO][4183] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119" Namespace="calico-system" Pod="csi-node-driver-ms9sg" WorkloadEndpoint="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:16.750194 containerd[1471]: time="2025-05-13T00:22:16.750103814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:16.750194 containerd[1471]: time="2025-05-13T00:22:16.750164606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:16.750365 containerd[1471]: time="2025-05-13T00:22:16.750179546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:16.750365 containerd[1471]: time="2025-05-13T00:22:16.750261962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:16.775003 systemd[1]: Started cri-containerd-4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119.scope - libcontainer container 4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119. 
May 13 00:22:16.786007 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:16.797090 containerd[1471]: time="2025-05-13T00:22:16.797031627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms9sg,Uid:6b656054-c5df-4336-9a83-8d89d2e6a28d,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119\"" May 13 00:22:16.798946 containerd[1471]: time="2025-05-13T00:22:16.798891389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:22:17.453516 containerd[1471]: time="2025-05-13T00:22:17.453466049Z" level=info msg="StopPodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\"" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.612 [INFO][4282] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.612 [INFO][4282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" iface="eth0" netns="/var/run/netns/cni-1db9daf1-d399-21df-ac8a-f2248c0223f9" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.613 [INFO][4282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" iface="eth0" netns="/var/run/netns/cni-1db9daf1-d399-21df-ac8a-f2248c0223f9" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.613 [INFO][4282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" iface="eth0" netns="/var/run/netns/cni-1db9daf1-d399-21df-ac8a-f2248c0223f9" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.613 [INFO][4282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.613 [INFO][4282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.632 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.632 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.632 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.638 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.638 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.639 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:17.644476 containerd[1471]: 2025-05-13 00:22:17.642 [INFO][4282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:17.645199 containerd[1471]: time="2025-05-13T00:22:17.644680561Z" level=info msg="TearDown network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" successfully" May 13 00:22:17.645199 containerd[1471]: time="2025-05-13T00:22:17.644719288Z" level=info msg="StopPodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" returns successfully" May 13 00:22:17.645249 kubelet[2514]: E0513 00:22:17.645178 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:17.646046 containerd[1471]: time="2025-05-13T00:22:17.646003620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ctn5,Uid:e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca,Namespace:kube-system,Attempt:1,}" May 13 00:22:17.647293 systemd[1]: run-netns-cni\x2d1db9daf1\x2dd399\x2d21df\x2dac8a\x2df2248c0223f9.mount: Deactivated successfully. 
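
Note: the WARNING "Asked to release address but it doesn't exist. Ignoring" is benign here: the earlier DEL attempts for this sandbox failed before any IPAM state could be cleaned up (or it was never written), and a CNI DEL is expected to be idempotent, so the plugin logs the miss and carries on. Sketched in Go (illustrative, not Calico's code):

    package main

    import "fmt"

    type ipam struct{ byHandle map[string]string }

    // release is idempotent: a CNI DEL may run after a failed or partial ADD,
    // so a missing allocation is logged and ignored rather than treated as fatal.
    func (m *ipam) release(handle string) {
        if _, ok := m.byHandle[handle]; !ok {
            fmt.Printf("WARNING: no allocation for %s; ignoring\n", handle)
            return
        }
        delete(m.byHandle, handle)
    }

    func main() {
        m := &ipam{byHandle: map[string]string{}}
        m.release("k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee") // no-op, as in the log
    }
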
May 13 00:22:17.764930 systemd-networkd[1406]: cali7594d600f58: Link UP May 13 00:22:17.766216 systemd-networkd[1406]: cali7594d600f58: Gained carrier May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.699 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0 coredns-668d6bf9bc- kube-system e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca 866 0 2025-05-13 00:21:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7ctn5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7594d600f58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.699 [INFO][4299] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.724 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" HandleID="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.731 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" HandleID="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7ctn5", "timestamp":"2025-05-13 00:22:17.72433355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.732 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.732 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.732 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.734 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.738 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.743 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.745 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.747 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.747 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.748 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296 May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.752 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.758 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.758 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" host="localhost" May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.758 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
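
Note: this second IPAM transaction reuses the block affinity confirmed above and simply hands out the next free address: 192.168.88.128 is the block base, .129 went to csi-node-driver-ms9sg, .130 goes to coredns-668d6bf9bc-7ctn5 here, and .131 goes to the calico-apiserver pod further below; all three workloads on this node draw from the same /26.
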
May 13 00:22:17.777053 containerd[1471]: 2025-05-13 00:22:17.758 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" HandleID="k8s-pod-network.9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.777573 containerd[1471]: 2025-05-13 00:22:17.762 [INFO][4299] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7ctn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7594d600f58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:17.777573 containerd[1471]: 2025-05-13 00:22:17.762 [INFO][4299] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.777573 containerd[1471]: 2025-05-13 00:22:17.762 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7594d600f58 ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.777573 containerd[1471]: 2025-05-13 00:22:17.765 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.777573 containerd[1471]: 2025-05-13 00:22:17.765 
[INFO][4299] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296", Pod:"coredns-668d6bf9bc-7ctn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7594d600f58", MAC:"ce:4f:da:2b:3c:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:17.777573 containerd[1471]: 2025-05-13 00:22:17.773 [INFO][4299] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ctn5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:17.800178 containerd[1471]: time="2025-05-13T00:22:17.800077251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:17.800178 containerd[1471]: time="2025-05-13T00:22:17.800131690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:17.800178 containerd[1471]: time="2025-05-13T00:22:17.800143495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:17.800341 containerd[1471]: time="2025-05-13T00:22:17.800221611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:17.823995 systemd[1]: Started cri-containerd-9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296.scope - libcontainer container 9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296. 
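
Note: in the endpoint dumps above the ports are printed in hex: Port:0x35 is 53 (3·16 + 5), the DNS port for both the UDP and TCP entries, and Port:0x23c1 is 9153 (2·4096 + 3·256 + 12·16 + 1 = 8192 + 768 + 192 + 1), the CoreDNS Prometheus metrics port. The Protocol's Type:1 indicates that the string form ("UDP"/"TCP") of the number-or-string union is the one set.
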
May 13 00:22:17.836875 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:17.862108 containerd[1471]: time="2025-05-13T00:22:17.862056987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ctn5,Uid:e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296\"" May 13 00:22:17.863343 kubelet[2514]: E0513 00:22:17.862807 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:17.864984 containerd[1471]: time="2025-05-13T00:22:17.864960033Z" level=info msg="CreateContainer within sandbox \"9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:22:17.882031 containerd[1471]: time="2025-05-13T00:22:17.881989190Z" level=info msg="CreateContainer within sandbox \"9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1ce6a5ede514e96c0075ff443dddb8cc881cc1caf2eb065c79eb17d2a9e644e\"" May 13 00:22:17.882739 containerd[1471]: time="2025-05-13T00:22:17.882687233Z" level=info msg="StartContainer for \"d1ce6a5ede514e96c0075ff443dddb8cc881cc1caf2eb065c79eb17d2a9e644e\"" May 13 00:22:17.915982 systemd[1]: Started cri-containerd-d1ce6a5ede514e96c0075ff443dddb8cc881cc1caf2eb065c79eb17d2a9e644e.scope - libcontainer container d1ce6a5ede514e96c0075ff443dddb8cc881cc1caf2eb065c79eb17d2a9e644e. May 13 00:22:17.948068 containerd[1471]: time="2025-05-13T00:22:17.947997875Z" level=info msg="StartContainer for \"d1ce6a5ede514e96c0075ff443dddb8cc881cc1caf2eb065c79eb17d2a9e644e\" returns successfully" May 13 00:22:17.990081 systemd-networkd[1406]: cali41971bf9613: Gained IPv6LL May 13 00:22:18.455516 containerd[1471]: time="2025-05-13T00:22:18.455464921Z" level=info msg="StopPodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\"" May 13 00:22:18.518409 kubelet[2514]: E0513 00:22:18.516764 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:18.548116 kubelet[2514]: I0513 00:22:18.548060 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7ctn5" podStartSLOduration=41.548039464 podStartE2EDuration="41.548039464s" podCreationTimestamp="2025-05-13 00:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:18.533433556 +0000 UTC m=+46.168802353" watchObservedRunningTime="2025-05-13 00:22:18.548039464 +0000 UTC m=+46.183408251" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.510 [INFO][4427] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.511 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" iface="eth0" netns="/var/run/netns/cni-61baea5a-d3b8-0e18-c617-52f41ae91b28" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.511 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" iface="eth0" netns="/var/run/netns/cni-61baea5a-d3b8-0e18-c617-52f41ae91b28" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.511 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" iface="eth0" netns="/var/run/netns/cni-61baea5a-d3b8-0e18-c617-52f41ae91b28" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.511 [INFO][4427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.511 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.552 [INFO][4435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.553 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.553 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.562 [WARNING][4435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.562 [INFO][4435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.565 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:18.576703 containerd[1471]: 2025-05-13 00:22:18.573 [INFO][4427] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:18.577670 containerd[1471]: time="2025-05-13T00:22:18.577617044Z" level=info msg="TearDown network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" successfully" May 13 00:22:18.577670 containerd[1471]: time="2025-05-13T00:22:18.577656302Z" level=info msg="StopPodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" returns successfully" May 13 00:22:18.578774 containerd[1471]: time="2025-05-13T00:22:18.578474254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-f2txh,Uid:60c928c1-a188-42a1-b0d8-c492716938ca,Namespace:calico-apiserver,Attempt:1,}" May 13 00:22:18.652171 containerd[1471]: time="2025-05-13T00:22:18.652014008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:18.653623 containerd[1471]: time="2025-05-13T00:22:18.652969015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 00:22:18.653724 systemd[1]: run-netns-cni\x2d61baea5a\x2dd3b8\x2d0e18\x2dc617\x2d52f41ae91b28.mount: Deactivated successfully. May 13 00:22:18.654980 containerd[1471]: time="2025-05-13T00:22:18.654817847Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:18.687241 containerd[1471]: time="2025-05-13T00:22:18.687091170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:18.689182 containerd[1471]: time="2025-05-13T00:22:18.688985163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.89003191s" May 13 00:22:18.689182 containerd[1471]: time="2025-05-13T00:22:18.689040073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 00:22:18.692538 containerd[1471]: time="2025-05-13T00:22:18.692409519Z" level=info msg="CreateContainer within sandbox \"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:22:18.704581 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:43230.service - OpenSSH per-connection server daemon (10.0.0.1:43230). 
May 13 00:22:18.725935 containerd[1471]: time="2025-05-13T00:22:18.725890116Z" level=info msg="CreateContainer within sandbox \"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"62a77c556cb740c0710a82c440607d25033d1de77f71d15885358289884f52bd\"" May 13 00:22:18.726442 containerd[1471]: time="2025-05-13T00:22:18.726409177Z" level=info msg="StartContainer for \"62a77c556cb740c0710a82c440607d25033d1de77f71d15885358289884f52bd\"" May 13 00:22:18.764991 systemd[1]: Started cri-containerd-62a77c556cb740c0710a82c440607d25033d1de77f71d15885358289884f52bd.scope - libcontainer container 62a77c556cb740c0710a82c440607d25033d1de77f71d15885358289884f52bd. May 13 00:22:18.766621 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 43230 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:18.769012 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:18.775325 systemd-logind[1458]: New session 11 of user core. May 13 00:22:18.785997 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:22:18.809301 containerd[1471]: time="2025-05-13T00:22:18.809247632Z" level=info msg="StartContainer for \"62a77c556cb740c0710a82c440607d25033d1de77f71d15885358289884f52bd\" returns successfully" May 13 00:22:18.810670 containerd[1471]: time="2025-05-13T00:22:18.810640438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:22:18.819329 systemd-networkd[1406]: calibdd03de0c1c: Link UP May 13 00:22:18.820690 systemd-networkd[1406]: calibdd03de0c1c: Gained carrier May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.646 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0 calico-apiserver-5ff4dd9db7- calico-apiserver 60c928c1-a188-42a1-b0d8-c492716938ca 878 0 2025-05-13 00:21:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff4dd9db7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ff4dd9db7-f2txh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibdd03de0c1c [] []}} ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.646 [INFO][4451] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.682 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" HandleID="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.692 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" HandleID="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ff4dd9db7-f2txh", "timestamp":"2025-05-13 00:22:18.68228028 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.692 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.692 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.692 [INFO][4466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.698 [INFO][4466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.791 [INFO][4466] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.796 [INFO][4466] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.798 [INFO][4466] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.800 [INFO][4466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.800 [INFO][4466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.802 [INFO][4466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605 May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.806 [INFO][4466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.813 [INFO][4466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.813 [INFO][4466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" host="localhost" May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.813 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:22:18.833242 containerd[1471]: 2025-05-13 00:22:18.813 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" HandleID="k8s-pod-network.7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.833962 containerd[1471]: 2025-05-13 00:22:18.816 [INFO][4451] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"60c928c1-a188-42a1-b0d8-c492716938ca", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ff4dd9db7-f2txh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdd03de0c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:18.833962 containerd[1471]: 2025-05-13 00:22:18.816 [INFO][4451] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.833962 containerd[1471]: 2025-05-13 00:22:18.816 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdd03de0c1c ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.833962 containerd[1471]: 2025-05-13 00:22:18.819 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.833962 containerd[1471]: 2025-05-13 00:22:18.820 [INFO][4451] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"60c928c1-a188-42a1-b0d8-c492716938ca", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605", Pod:"calico-apiserver-5ff4dd9db7-f2txh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdd03de0c1c", MAC:"b6:da:40:49:ef:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:18.833962 containerd[1471]: 2025-05-13 00:22:18.830 [INFO][4451] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-f2txh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:18.859050 containerd[1471]: time="2025-05-13T00:22:18.858923187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:18.859050 containerd[1471]: time="2025-05-13T00:22:18.858991384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:18.859050 containerd[1471]: time="2025-05-13T00:22:18.859029691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:18.859532 containerd[1471]: time="2025-05-13T00:22:18.859269893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:18.884057 systemd[1]: Started cri-containerd-7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605.scope - libcontainer container 7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605. 
May 13 00:22:18.898211 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:18.929144 containerd[1471]: time="2025-05-13T00:22:18.929104006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-f2txh,Uid:60c928c1-a188-42a1-b0d8-c492716938ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605\"" May 13 00:22:18.936116 sshd[4475]: pam_unix(sshd:session): session closed for user core May 13 00:22:18.947906 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:43230.service: Deactivated successfully. May 13 00:22:18.949696 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:22:18.950407 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. May 13 00:22:18.956106 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:43244.service - OpenSSH per-connection server daemon (10.0.0.1:43244). May 13 00:22:18.957192 systemd-logind[1458]: Removed session 11. May 13 00:22:18.993215 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 43244 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:18.994923 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:18.999247 systemd-logind[1458]: New session 12 of user core. May 13 00:22:19.010978 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:22:19.156171 sshd[4577]: pam_unix(sshd:session): session closed for user core May 13 00:22:19.166481 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:43244.service: Deactivated successfully. May 13 00:22:19.171482 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:22:19.172776 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. May 13 00:22:19.187684 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:43246.service - OpenSSH per-connection server daemon (10.0.0.1:43246). May 13 00:22:19.194250 systemd-logind[1458]: Removed session 12. May 13 00:22:19.226967 sshd[4589]: Accepted publickey for core from 10.0.0.1 port 43246 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:19.227795 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:19.232218 systemd-logind[1458]: New session 13 of user core. May 13 00:22:19.243130 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:22:19.354939 sshd[4589]: pam_unix(sshd:session): session closed for user core May 13 00:22:19.359812 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:43246.service: Deactivated successfully. May 13 00:22:19.361830 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:22:19.362571 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. May 13 00:22:19.363491 systemd-logind[1458]: Removed session 13. 
May 13 00:22:19.453170 containerd[1471]: time="2025-05-13T00:22:19.453072273Z" level=info msg="StopPodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\"" May 13 00:22:19.454125 containerd[1471]: time="2025-05-13T00:22:19.453196642Z" level=info msg="StopPodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\"" May 13 00:22:19.523403 kubelet[2514]: E0513 00:22:19.523369 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:19.590019 systemd-networkd[1406]: cali7594d600f58: Gained IPv6LL May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.692 [INFO][4636] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.693 [INFO][4636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" iface="eth0" netns="/var/run/netns/cni-a5ab516c-b40d-249b-5406-678b58c0ead8" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.693 [INFO][4636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" iface="eth0" netns="/var/run/netns/cni-a5ab516c-b40d-249b-5406-678b58c0ead8" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.693 [INFO][4636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" iface="eth0" netns="/var/run/netns/cni-a5ab516c-b40d-249b-5406-678b58c0ead8" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.693 [INFO][4636] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.693 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.714 [INFO][4652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.714 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.714 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.736 [WARNING][4652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.736 [INFO][4652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.883 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:19.888849 containerd[1471]: 2025-05-13 00:22:19.885 [INFO][4636] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:19.892019 containerd[1471]: time="2025-05-13T00:22:19.890006338Z" level=info msg="TearDown network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" successfully" May 13 00:22:19.892019 containerd[1471]: time="2025-05-13T00:22:19.890048272Z" level=info msg="StopPodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" returns successfully" May 13 00:22:19.892019 containerd[1471]: time="2025-05-13T00:22:19.890683185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65dcd6bcdf-dhvvt,Uid:3b97b55b-0703-40cf-9f00-a260ed5d0dc1,Namespace:calico-system,Attempt:1,}" May 13 00:22:19.893997 systemd[1]: run-netns-cni\x2da5ab516c\x2db40d\x2d249b\x2d5406\x2d678b58c0ead8.mount: Deactivated successfully. May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.690 [INFO][4635] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.691 [INFO][4635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" iface="eth0" netns="/var/run/netns/cni-09049c1e-6483-a9db-da7d-34e8e522d2f0" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.691 [INFO][4635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" iface="eth0" netns="/var/run/netns/cni-09049c1e-6483-a9db-da7d-34e8e522d2f0" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.692 [INFO][4635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" iface="eth0" netns="/var/run/netns/cni-09049c1e-6483-a9db-da7d-34e8e522d2f0" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.692 [INFO][4635] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.692 [INFO][4635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.718 [INFO][4650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.718 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.883 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.891 [WARNING][4650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.891 [INFO][4650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.893 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:19.899081 containerd[1471]: 2025-05-13 00:22:19.896 [INFO][4635] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:19.900555 containerd[1471]: time="2025-05-13T00:22:19.899397387Z" level=info msg="TearDown network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" successfully" May 13 00:22:19.900555 containerd[1471]: time="2025-05-13T00:22:19.899440833Z" level=info msg="StopPodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" returns successfully" May 13 00:22:19.900555 containerd[1471]: time="2025-05-13T00:22:19.900212000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-wvgwd,Uid:793260ed-37cd-4660-a22c-c5f24697994b,Namespace:calico-apiserver,Attempt:1,}" May 13 00:22:19.901998 systemd[1]: run-netns-cni\x2d09049c1e\x2d6483\x2da9db\x2dda7d\x2d34e8e522d2f0.mount: Deactivated successfully. 
May 13 00:22:20.102346 systemd-networkd[1406]: calibdd03de0c1c: Gained IPv6LL May 13 00:22:20.453170 containerd[1471]: time="2025-05-13T00:22:20.453108207Z" level=info msg="StopPodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\"" May 13 00:22:20.526285 kubelet[2514]: E0513 00:22:20.525942 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:20.559434 systemd-networkd[1406]: cali350d657f9ab: Link UP May 13 00:22:20.559643 systemd-networkd[1406]: cali350d657f9ab: Gained carrier May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.382 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0 calico-apiserver-5ff4dd9db7- calico-apiserver 793260ed-37cd-4660-a22c-c5f24697994b 913 0 2025-05-13 00:21:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff4dd9db7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ff4dd9db7-wvgwd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali350d657f9ab [] []}} ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.383 [INFO][4678] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.412 [INFO][4696] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" HandleID="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.520 [INFO][4696] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" HandleID="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ff4dd9db7-wvgwd", "timestamp":"2025-05-13 00:22:20.41203216 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.520 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.520 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.520 [INFO][4696] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.522 [INFO][4696] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.527 [INFO][4696] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.532 [INFO][4696] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.533 [INFO][4696] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.537 [INFO][4696] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.537 [INFO][4696] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.541 [INFO][4696] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543 May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.546 [INFO][4696] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.552 [INFO][4696] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.553 [INFO][4696] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" host="localhost" May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.553 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:22:20.581263 containerd[1471]: 2025-05-13 00:22:20.553 [INFO][4696] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" HandleID="k8s-pod-network.649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.582066 containerd[1471]: 2025-05-13 00:22:20.557 [INFO][4678] cni-plugin/k8s.go 386: Populated endpoint ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"793260ed-37cd-4660-a22c-c5f24697994b", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ff4dd9db7-wvgwd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali350d657f9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:20.582066 containerd[1471]: 2025-05-13 00:22:20.557 [INFO][4678] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.582066 containerd[1471]: 2025-05-13 00:22:20.557 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali350d657f9ab ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.582066 containerd[1471]: 2025-05-13 00:22:20.560 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.582066 containerd[1471]: 2025-05-13 00:22:20.564 [INFO][4678] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"793260ed-37cd-4660-a22c-c5f24697994b", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543", Pod:"calico-apiserver-5ff4dd9db7-wvgwd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali350d657f9ab", MAC:"56:58:62:77:13:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:20.582066 containerd[1471]: 2025-05-13 00:22:20.575 [INFO][4678] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4dd9db7-wvgwd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:20.606757 containerd[1471]: time="2025-05-13T00:22:20.606043749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:20.606757 containerd[1471]: time="2025-05-13T00:22:20.606718702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:20.606757 containerd[1471]: time="2025-05-13T00:22:20.606731889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:20.607002 containerd[1471]: time="2025-05-13T00:22:20.606835256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:20.625526 systemd[1]: Started cri-containerd-649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543.scope - libcontainer container 649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543. 
May 13 00:22:20.643069 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:20.662087 systemd-networkd[1406]: cali3052c93544a: Link UP May 13 00:22:20.663045 systemd-networkd[1406]: cali3052c93544a: Gained carrier May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.500 [INFO][4728] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.501 [INFO][4728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" iface="eth0" netns="/var/run/netns/cni-840eeefe-f0e6-6aa9-652a-a36368b5860d" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.501 [INFO][4728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" iface="eth0" netns="/var/run/netns/cni-840eeefe-f0e6-6aa9-652a-a36368b5860d" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.501 [INFO][4728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" iface="eth0" netns="/var/run/netns/cni-840eeefe-f0e6-6aa9-652a-a36368b5860d" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.501 [INFO][4728] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.501 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.523 [INFO][4736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.523 [INFO][4736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.656 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.666 [WARNING][4736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.666 [INFO][4736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.668 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:20.675207 containerd[1471]: 2025-05-13 00:22:20.670 [INFO][4728] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:20.678083 containerd[1471]: time="2025-05-13T00:22:20.676927258Z" level=info msg="TearDown network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" successfully" May 13 00:22:20.678083 containerd[1471]: time="2025-05-13T00:22:20.676968711Z" level=info msg="StopPodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" returns successfully" May 13 00:22:20.678416 kubelet[2514]: E0513 00:22:20.678378 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:20.680339 systemd[1]: run-netns-cni\x2d840eeefe\x2df0e6\x2d6aa9\x2d652a\x2da36368b5860d.mount: Deactivated successfully. May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.380 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0 calico-kube-controllers-65dcd6bcdf- calico-system 3b97b55b-0703-40cf-9f00-a260ed5d0dc1 914 0 2025-05-13 00:21:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65dcd6bcdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65dcd6bcdf-dhvvt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3052c93544a [] []}} ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.380 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.417 [INFO][4694] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" HandleID="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.523 [INFO][4694] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" HandleID="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00053d520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65dcd6bcdf-dhvvt", "timestamp":"2025-05-13 00:22:20.417261922 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:20.682723 
containerd[1471]: 2025-05-13 00:22:20.523 [INFO][4694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.553 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.554 [INFO][4694] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.623 [INFO][4694] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.629 [INFO][4694] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.634 [INFO][4694] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.637 [INFO][4694] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.639 [INFO][4694] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.639 [INFO][4694] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.642 [INFO][4694] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78 May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.648 [INFO][4694] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.656 [INFO][4694] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.656 [INFO][4694] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" host="localhost" May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.656 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:22:20.682723 containerd[1471]: 2025-05-13 00:22:20.656 [INFO][4694] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" HandleID="k8s-pod-network.22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.683622 containerd[1471]: 2025-05-13 00:22:20.659 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0", GenerateName:"calico-kube-controllers-65dcd6bcdf-", Namespace:"calico-system", SelfLink:"", UID:"3b97b55b-0703-40cf-9f00-a260ed5d0dc1", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65dcd6bcdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65dcd6bcdf-dhvvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3052c93544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:20.683622 containerd[1471]: 2025-05-13 00:22:20.659 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.683622 containerd[1471]: 2025-05-13 00:22:20.659 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3052c93544a ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.683622 containerd[1471]: 2025-05-13 00:22:20.662 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.683622 containerd[1471]: 2025-05-13 00:22:20.663 [INFO][4668] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0", GenerateName:"calico-kube-controllers-65dcd6bcdf-", Namespace:"calico-system", SelfLink:"", UID:"3b97b55b-0703-40cf-9f00-a260ed5d0dc1", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65dcd6bcdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78", Pod:"calico-kube-controllers-65dcd6bcdf-dhvvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3052c93544a", MAC:"4a:ab:ee:40:5c:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:20.683622 containerd[1471]: 2025-05-13 00:22:20.671 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78" Namespace="calico-system" Pod="calico-kube-controllers-65dcd6bcdf-dhvvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:20.684562 containerd[1471]: time="2025-05-13T00:22:20.684515136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5xmrr,Uid:8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf,Namespace:kube-system,Attempt:1,}" May 13 00:22:20.690671 containerd[1471]: time="2025-05-13T00:22:20.690620894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4dd9db7-wvgwd,Uid:793260ed-37cd-4660-a22c-c5f24697994b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543\"" May 13 00:22:20.712522 containerd[1471]: time="2025-05-13T00:22:20.712124553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:20.712522 containerd[1471]: time="2025-05-13T00:22:20.712199744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:20.712522 containerd[1471]: time="2025-05-13T00:22:20.712214844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:20.712522 containerd[1471]: time="2025-05-13T00:22:20.712302610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:20.737462 systemd[1]: Started cri-containerd-22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78.scope - libcontainer container 22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78. May 13 00:22:20.754380 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:20.780114 containerd[1471]: time="2025-05-13T00:22:20.780068530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65dcd6bcdf-dhvvt,Uid:3b97b55b-0703-40cf-9f00-a260ed5d0dc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78\"" May 13 00:22:21.006484 containerd[1471]: time="2025-05-13T00:22:21.006340058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:21.007686 containerd[1471]: time="2025-05-13T00:22:21.007417185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 00:22:21.008835 containerd[1471]: time="2025-05-13T00:22:21.008809363Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:21.011524 containerd[1471]: time="2025-05-13T00:22:21.011231143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:21.012150 containerd[1471]: time="2025-05-13T00:22:21.011979271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.201309885s" May 13 00:22:21.012150 containerd[1471]: time="2025-05-13T00:22:21.012009612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 00:22:21.014028 systemd-networkd[1406]: cali01b832f6add: Link UP May 13 00:22:21.014711 containerd[1471]: time="2025-05-13T00:22:21.014129918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:22:21.014711 containerd[1471]: time="2025-05-13T00:22:21.014657294Z" level=info msg="CreateContainer within sandbox \"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:22:21.015331 systemd-networkd[1406]: cali01b832f6add: Gained carrier May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.805 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0 coredns-668d6bf9bc- kube-system 8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf 924 0 2025-05-13 00:21:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5xmrr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01b832f6add [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.805 [INFO][4834] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.870 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" HandleID="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.977 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" HandleID="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000316800), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5xmrr", "timestamp":"2025-05-13 00:22:20.870114075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.977 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.977 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.977 [INFO][4871] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.980 [INFO][4871] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.984 [INFO][4871] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.989 [INFO][4871] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.990 [INFO][4871] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.993 [INFO][4871] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.993 [INFO][4871] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:20.994 [INFO][4871] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559 May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:21.002 [INFO][4871] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:21.008 [INFO][4871] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:21.008 [INFO][4871] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" host="localhost" May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:21.008 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:22:21.027341 containerd[1471]: 2025-05-13 00:22:21.008 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" HandleID="k8s-pod-network.20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0"
May 13 00:22:21.027952 containerd[1471]: 2025-05-13 00:22:21.011 [INFO][4834] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5xmrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01b832f6add", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:22:21.027952 containerd[1471]: 2025-05-13 00:22:21.011 [INFO][4834] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0"
May 13 00:22:21.027952 containerd[1471]: 2025-05-13 00:22:21.011 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01b832f6add ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0"
May 13 00:22:21.027952 containerd[1471]: 2025-05-13 00:22:21.014 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0"
May 13 00:22:21.027952 containerd[1471]: 2025-05-13 00:22:21.015 [INFO][4834] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559", Pod:"coredns-668d6bf9bc-5xmrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01b832f6add", MAC:"ce:40:45:64:b3:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:22:21.027952 containerd[1471]: 2025-05-13 00:22:21.024 [INFO][4834] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559" Namespace="kube-system" Pod="coredns-668d6bf9bc-5xmrr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0"
May 13 00:22:21.032676 containerd[1471]: time="2025-05-13T00:22:21.032640347Z" level=info msg="CreateContainer within sandbox \"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c277d2d09dd4211aacc8dc90ddcf32a0c9854071ed33daeb7adcc86d82806e97\""
May 13 00:22:21.034297 containerd[1471]: time="2025-05-13T00:22:21.034278909Z" level=info msg="StartContainer for \"c277d2d09dd4211aacc8dc90ddcf32a0c9854071ed33daeb7adcc86d82806e97\""
May 13 00:22:21.056490 containerd[1471]: time="2025-05-13T00:22:21.055589976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:22:21.056490 containerd[1471]: time="2025-05-13T00:22:21.056259156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:22:21.056490 containerd[1471]: time="2025-05-13T00:22:21.056274968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:21.056490 containerd[1471]: time="2025-05-13T00:22:21.056367011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:21.068098 systemd[1]: Started cri-containerd-c277d2d09dd4211aacc8dc90ddcf32a0c9854071ed33daeb7adcc86d82806e97.scope - libcontainer container c277d2d09dd4211aacc8dc90ddcf32a0c9854071ed33daeb7adcc86d82806e97.
May 13 00:22:21.082229 systemd[1]: Started cri-containerd-20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559.scope - libcontainer container 20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559.
May 13 00:22:21.100104 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:22:21.108943 containerd[1471]: time="2025-05-13T00:22:21.108879543Z" level=info msg="StartContainer for \"c277d2d09dd4211aacc8dc90ddcf32a0c9854071ed33daeb7adcc86d82806e97\" returns successfully"
May 13 00:22:21.128494 containerd[1471]: time="2025-05-13T00:22:21.128439744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5xmrr,Uid:8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf,Namespace:kube-system,Attempt:1,} returns sandbox id \"20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559\""
May 13 00:22:21.129447 kubelet[2514]: E0513 00:22:21.129419 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:21.131332 containerd[1471]: time="2025-05-13T00:22:21.131298179Z" level=info msg="CreateContainer within sandbox \"20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:22:21.147607 containerd[1471]: time="2025-05-13T00:22:21.147541226Z" level=info msg="CreateContainer within sandbox \"20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4ceea599ebf5bccb6d6a8609c313d6fe2128ce91e433100792071bde679cdb3\""
May 13 00:22:21.149439 containerd[1471]: time="2025-05-13T00:22:21.148366519Z" level=info msg="StartContainer for \"e4ceea599ebf5bccb6d6a8609c313d6fe2128ce91e433100792071bde679cdb3\""
May 13 00:22:21.175986 systemd[1]: Started cri-containerd-e4ceea599ebf5bccb6d6a8609c313d6fe2128ce91e433100792071bde679cdb3.scope - libcontainer container e4ceea599ebf5bccb6d6a8609c313d6fe2128ce91e433100792071bde679cdb3.
May 13 00:22:21.204223 containerd[1471]: time="2025-05-13T00:22:21.204108328Z" level=info msg="StartContainer for \"e4ceea599ebf5bccb6d6a8609c313d6fe2128ce91e433100792071bde679cdb3\" returns successfully"
May 13 00:22:21.517789 kubelet[2514]: I0513 00:22:21.517749 2514 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 13 00:22:21.517789 kubelet[2514]: I0513 00:22:21.517782 2514 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 13 00:22:21.532077 kubelet[2514]: E0513 00:22:21.532039 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:21.622162 kubelet[2514]: I0513 00:22:21.622088 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ms9sg" podStartSLOduration=32.407766366 podStartE2EDuration="36.622069915s" podCreationTimestamp="2025-05-13 00:21:45 +0000 UTC" firstStartedPulling="2025-05-13 00:22:16.79854374 +0000 UTC m=+44.433912517" lastFinishedPulling="2025-05-13 00:22:21.012847279 +0000 UTC m=+48.648216066" observedRunningTime="2025-05-13 00:22:21.621700375 +0000 UTC m=+49.257069162" watchObservedRunningTime="2025-05-13 00:22:21.622069915 +0000 UTC m=+49.257438702"
May 13 00:22:21.693795 kubelet[2514]: I0513 00:22:21.693293 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5xmrr" podStartSLOduration=44.693276873 podStartE2EDuration="44.693276873s" podCreationTimestamp="2025-05-13 00:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:21.693068737 +0000 UTC m=+49.328437534" watchObservedRunningTime="2025-05-13 00:22:21.693276873 +0000 UTC m=+49.328645650"
May 13 00:22:22.278063 systemd-networkd[1406]: cali3052c93544a: Gained IPv6LL
May 13 00:22:22.534089 systemd-networkd[1406]: cali350d657f9ab: Gained IPv6LL
May 13 00:22:22.534646 systemd-networkd[1406]: cali01b832f6add: Gained IPv6LL
May 13 00:22:22.536671 kubelet[2514]: E0513 00:22:22.536641 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:23.538641 kubelet[2514]: E0513 00:22:23.538596 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:24.375089 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:53914.service - OpenSSH per-connection server daemon (10.0.0.1:53914).
May 13 00:22:24.651055 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 53914 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:24.652682 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:24.656678 systemd-logind[1458]: New session 14 of user core.
May 13 00:22:24.664974 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 00:22:24.840060 sshd[5033]: pam_unix(sshd:session): session closed for user core
May 13 00:22:24.847071 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:53914.service: Deactivated successfully.
May 13 00:22:24.849411 systemd[1]: session-14.scope: Deactivated successfully.
May 13 00:22:24.850497 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
May 13 00:22:24.851486 systemd-logind[1458]: Removed session 14.
May 13 00:22:25.142240 containerd[1471]: time="2025-05-13T00:22:25.142146858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:25.167105 containerd[1471]: time="2025-05-13T00:22:25.167009436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437"
May 13 00:22:25.201261 containerd[1471]: time="2025-05-13T00:22:25.201200373Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:25.228405 containerd[1471]: time="2025-05-13T00:22:25.228363284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:25.229210 containerd[1471]: time="2025-05-13T00:22:25.229149544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.21498638s"
May 13 00:22:25.229210 containerd[1471]: time="2025-05-13T00:22:25.229177290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 13 00:22:25.230333 containerd[1471]: time="2025-05-13T00:22:25.230283489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 13 00:22:25.231510 containerd[1471]: time="2025-05-13T00:22:25.231471140Z" level=info msg="CreateContainer within sandbox \"7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 13 00:22:25.545361 containerd[1471]: time="2025-05-13T00:22:25.545304978Z" level=info msg="CreateContainer within sandbox \"7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"661943e45599339bb9050468fe6e91a4cd0e2274e16ad40bfada675041a07544\""
May 13 00:22:25.545706 containerd[1471]: time="2025-05-13T00:22:25.545672653Z" level=info msg="StartContainer for \"661943e45599339bb9050468fe6e91a4cd0e2274e16ad40bfada675041a07544\""
May 13 00:22:25.580001 systemd[1]: Started cri-containerd-661943e45599339bb9050468fe6e91a4cd0e2274e16ad40bfada675041a07544.scope - libcontainer container 661943e45599339bb9050468fe6e91a4cd0e2274e16ad40bfada675041a07544.
May 13 00:22:25.711536 containerd[1471]: time="2025-05-13T00:22:25.711483642Z" level=info msg="StartContainer for \"661943e45599339bb9050468fe6e91a4cd0e2274e16ad40bfada675041a07544\" returns successfully"
May 13 00:22:26.601362 kubelet[2514]: I0513 00:22:26.601257 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-f2txh" podStartSLOduration=35.301572602 podStartE2EDuration="41.601241094s" podCreationTimestamp="2025-05-13 00:21:45 +0000 UTC" firstStartedPulling="2025-05-13 00:22:18.930410639 +0000 UTC m=+46.565779426" lastFinishedPulling="2025-05-13 00:22:25.23007913 +0000 UTC m=+52.865447918" observedRunningTime="2025-05-13 00:22:26.601044812 +0000 UTC m=+54.236413599" watchObservedRunningTime="2025-05-13 00:22:26.601241094 +0000 UTC m=+54.236609882"
May 13 00:22:27.196647 containerd[1471]: time="2025-05-13T00:22:27.196478812Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:27.221440 containerd[1471]: time="2025-05-13T00:22:27.221356984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 13 00:22:27.223506 containerd[1471]: time="2025-05-13T00:22:27.223464448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 1.99314673s"
May 13 00:22:27.223506 containerd[1471]: time="2025-05-13T00:22:27.223493585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 13 00:22:27.224575 containerd[1471]: time="2025-05-13T00:22:27.224551906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 13 00:22:27.225700 containerd[1471]: time="2025-05-13T00:22:27.225664476Z" level=info msg="CreateContainer within sandbox \"649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 13 00:22:27.499250 containerd[1471]: time="2025-05-13T00:22:27.499097448Z" level=info msg="CreateContainer within sandbox \"649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0d128fe0e4d51bd1c207907b83aa69263df4d9a5cedf05d7f8c1beb36c9f35a4\""
May 13 00:22:27.499903 containerd[1471]: time="2025-05-13T00:22:27.499528097Z" level=info msg="StartContainer for \"0d128fe0e4d51bd1c207907b83aa69263df4d9a5cedf05d7f8c1beb36c9f35a4\""
May 13 00:22:27.539006 systemd[1]: Started cri-containerd-0d128fe0e4d51bd1c207907b83aa69263df4d9a5cedf05d7f8c1beb36c9f35a4.scope - libcontainer container 0d128fe0e4d51bd1c207907b83aa69263df4d9a5cedf05d7f8c1beb36c9f35a4.
May 13 00:22:27.609001 containerd[1471]: time="2025-05-13T00:22:27.608955898Z" level=info msg="StartContainer for \"0d128fe0e4d51bd1c207907b83aa69263df4d9a5cedf05d7f8c1beb36c9f35a4\" returns successfully"
May 13 00:22:28.606989 kubelet[2514]: I0513 00:22:28.606891 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff4dd9db7-wvgwd" podStartSLOduration=37.074764663 podStartE2EDuration="43.606853806s" podCreationTimestamp="2025-05-13 00:21:45 +0000 UTC" firstStartedPulling="2025-05-13 00:22:20.692251432 +0000 UTC m=+48.327620219" lastFinishedPulling="2025-05-13 00:22:27.224340575 +0000 UTC m=+54.859709362" observedRunningTime="2025-05-13 00:22:28.606766703 +0000 UTC m=+56.242135490" watchObservedRunningTime="2025-05-13 00:22:28.606853806 +0000 UTC m=+56.242222593"
May 13 00:22:29.558973 kubelet[2514]: I0513 00:22:29.558927 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 00:22:29.859851 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:53928.service - OpenSSH per-connection server daemon (10.0.0.1:53928).
May 13 00:22:29.908304 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 53928 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:29.910230 sshd[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:29.915022 systemd-logind[1458]: New session 15 of user core.
May 13 00:22:29.920000 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 00:22:30.082107 sshd[5137]: pam_unix(sshd:session): session closed for user core
May 13 00:22:30.093149 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:53928.service: Deactivated successfully.
May 13 00:22:30.095733 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:22:30.097466 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
May 13 00:22:30.104141 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938).
May 13 00:22:30.106418 systemd-logind[1458]: Removed session 15.
May 13 00:22:30.155563 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:30.158465 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:30.163658 systemd-logind[1458]: New session 16 of user core.
May 13 00:22:30.170652 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 00:22:30.520775 containerd[1471]: time="2025-05-13T00:22:30.520708485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:30.522241 containerd[1471]: time="2025-05-13T00:22:30.522202631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 13 00:22:30.523556 containerd[1471]: time="2025-05-13T00:22:30.523525506Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:30.527889 containerd[1471]: time="2025-05-13T00:22:30.527845799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:30.528401 containerd[1471]: time="2025-05-13T00:22:30.528358451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.303770843s"
May 13 00:22:30.528451 containerd[1471]: time="2025-05-13T00:22:30.528399212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 13 00:22:30.537710 containerd[1471]: time="2025-05-13T00:22:30.537224037Z" level=info msg="CreateContainer within sandbox \"22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 13 00:22:30.554839 containerd[1471]: time="2025-05-13T00:22:30.554761946Z" level=info msg="CreateContainer within sandbox \"22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6eaf1614b8565009b3db01be838d08f347e2c0e0425146ac65682761e707249e\""
May 13 00:22:30.556723 containerd[1471]: time="2025-05-13T00:22:30.555351761Z" level=info msg="StartContainer for \"6eaf1614b8565009b3db01be838d08f347e2c0e0425146ac65682761e707249e\""
May 13 00:22:30.574707 sshd[5158]: pam_unix(sshd:session): session closed for user core
May 13 00:22:30.582540 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:53938.service: Deactivated successfully.
May 13 00:22:30.584190 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:22:30.585878 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
May 13 00:22:30.593145 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:53950.service - OpenSSH per-connection server daemon (10.0.0.1:53950).
May 13 00:22:30.599585 systemd-logind[1458]: Removed session 16.
May 13 00:22:30.610010 systemd[1]: Started cri-containerd-6eaf1614b8565009b3db01be838d08f347e2c0e0425146ac65682761e707249e.scope - libcontainer container 6eaf1614b8565009b3db01be838d08f347e2c0e0425146ac65682761e707249e.
May 13 00:22:30.633735 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 53950 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:30.635425 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:30.641302 systemd-logind[1458]: New session 17 of user core.
May 13 00:22:30.645998 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 00:22:30.651720 containerd[1471]: time="2025-05-13T00:22:30.651674426Z" level=info msg="StartContainer for \"6eaf1614b8565009b3db01be838d08f347e2c0e0425146ac65682761e707249e\" returns successfully"
May 13 00:22:31.485852 sshd[5180]: pam_unix(sshd:session): session closed for user core
May 13 00:22:31.493180 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:53950.service: Deactivated successfully.
May 13 00:22:31.495140 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:22:31.495915 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
May 13 00:22:31.505279 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966).
May 13 00:22:31.506498 systemd-logind[1458]: Removed session 17.
May 13 00:22:31.541885 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:31.542096 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:31.549307 systemd-logind[1458]: New session 18 of user core.
May 13 00:22:31.553004 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 00:22:31.623056 kubelet[2514]: I0513 00:22:31.622990 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65dcd6bcdf-dhvvt" podStartSLOduration=36.875150979 podStartE2EDuration="46.622973484s" podCreationTimestamp="2025-05-13 00:21:45 +0000 UTC" firstStartedPulling="2025-05-13 00:22:20.781177022 +0000 UTC m=+48.416545799" lastFinishedPulling="2025-05-13 00:22:30.528999517 +0000 UTC m=+58.164368304" observedRunningTime="2025-05-13 00:22:31.579850023 +0000 UTC m=+59.215218820" watchObservedRunningTime="2025-05-13 00:22:31.622973484 +0000 UTC m=+59.258342271"
May 13 00:22:32.026871 sshd[5233]: pam_unix(sshd:session): session closed for user core
May 13 00:22:32.033911 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:53966.service: Deactivated successfully.
May 13 00:22:32.035546 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:22:32.037178 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
May 13 00:22:32.046545 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:53972.service - OpenSSH per-connection server daemon (10.0.0.1:53972).
May 13 00:22:32.047355 systemd-logind[1458]: Removed session 18.
May 13 00:22:32.083780 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 53972 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:32.085407 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:32.089675 systemd-logind[1458]: New session 19 of user core.
May 13 00:22:32.096145 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 00:22:32.210954 sshd[5273]: pam_unix(sshd:session): session closed for user core
May 13 00:22:32.215383 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:53972.service: Deactivated successfully.
May 13 00:22:32.218137 systemd[1]: session-19.scope: Deactivated successfully.
May 13 00:22:32.218885 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
May 13 00:22:32.219959 systemd-logind[1458]: Removed session 19.
May 13 00:22:32.443180 containerd[1471]: time="2025-05-13T00:22:32.443137783Z" level=info msg="StopPodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\""
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.481 [WARNING][5301] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"793260ed-37cd-4660-a22c-c5f24697994b", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543", Pod:"calico-apiserver-5ff4dd9db7-wvgwd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali350d657f9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.481 [INFO][5301] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7"
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.481 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" iface="eth0" netns=""
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.481 [INFO][5301] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7"
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.481 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7"
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.501 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0"
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.501 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.501 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.506 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.506 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.507 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.512543 containerd[1471]: 2025-05-13 00:22:32.509 [INFO][5301] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:32.512979 containerd[1471]: time="2025-05-13T00:22:32.512586376Z" level=info msg="TearDown network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" successfully" May 13 00:22:32.512979 containerd[1471]: time="2025-05-13T00:22:32.512621009Z" level=info msg="StopPodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" returns successfully" May 13 00:22:32.513247 containerd[1471]: time="2025-05-13T00:22:32.513216722Z" level=info msg="RemovePodSandbox for \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\"" May 13 00:22:32.515454 containerd[1471]: time="2025-05-13T00:22:32.515420498Z" level=info msg="Forcibly stopping sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\"" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.546 [WARNING][5334] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"793260ed-37cd-4660-a22c-c5f24697994b", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"649fcb9d3c75d3552279647bfb5c50ef9f19c5b71c63df6f995625617ac51543", Pod:"calico-apiserver-5ff4dd9db7-wvgwd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali350d657f9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.546 [INFO][5334] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.546 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" iface="eth0" netns="" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.546 [INFO][5334] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.546 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.566 [INFO][5343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.566 [INFO][5343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.566 [INFO][5343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.571 [WARNING][5343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.571 [INFO][5343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" HandleID="k8s-pod-network.8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--wvgwd-eth0" May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.572 [INFO][5343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.577029 containerd[1471]: 2025-05-13 00:22:32.574 [INFO][5334] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7" May 13 00:22:32.577439 containerd[1471]: time="2025-05-13T00:22:32.577064584Z" level=info msg="TearDown network for sandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" successfully" May 13 00:22:32.582922 containerd[1471]: time="2025-05-13T00:22:32.582888381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:32.582965 containerd[1471]: time="2025-05-13T00:22:32.582947889Z" level=info msg="RemovePodSandbox \"8d9fbbb64cc3d1d16ce7b16a31a21bdedcb2cc9ca95798e6fc04a148e2889fb7\" returns successfully" May 13 00:22:32.583489 containerd[1471]: time="2025-05-13T00:22:32.583470088Z" level=info msg="StopPodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\"" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.615 [WARNING][5365] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559", Pod:"coredns-668d6bf9bc-5xmrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01b832f6add", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.616 [INFO][5365] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.616 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" iface="eth0" netns="" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.616 [INFO][5365] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.616 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.635 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.635 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.635 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.640 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.640 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.641 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.645839 containerd[1471]: 2025-05-13 00:22:32.643 [INFO][5365] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.646383 containerd[1471]: time="2025-05-13T00:22:32.645899041Z" level=info msg="TearDown network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" successfully" May 13 00:22:32.646383 containerd[1471]: time="2025-05-13T00:22:32.645923184Z" level=info msg="StopPodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" returns successfully" May 13 00:22:32.646474 containerd[1471]: time="2025-05-13T00:22:32.646435166Z" level=info msg="RemovePodSandbox for \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\"" May 13 00:22:32.646502 containerd[1471]: time="2025-05-13T00:22:32.646482441Z" level=info msg="Forcibly stopping sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\"" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.679 [WARNING][5399] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a9a8a5b-440e-4b4f-8eb3-b78794cd5abf", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20e877e457a8861b3a77b490809fc402ab1fda9d8b80b571b3202bf583312559", Pod:"coredns-668d6bf9bc-5xmrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01b832f6add", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.679 [INFO][5399] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.679 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" iface="eth0" netns="" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.679 [INFO][5399] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.679 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.698 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.698 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.698 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.703 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.703 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" HandleID="k8s-pod-network.0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" Workload="localhost-k8s-coredns--668d6bf9bc--5xmrr-eth0" May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.704 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.708938 containerd[1471]: 2025-05-13 00:22:32.706 [INFO][5399] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647" May 13 00:22:32.708938 containerd[1471]: time="2025-05-13T00:22:32.708895374Z" level=info msg="TearDown network for sandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" successfully" May 13 00:22:32.713060 containerd[1471]: time="2025-05-13T00:22:32.713034730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:32.713102 containerd[1471]: time="2025-05-13T00:22:32.713090171Z" level=info msg="RemovePodSandbox \"0b7c08c623ae89f77914254f7e3460c274674a53271566967f7717dbb6c15647\" returns successfully" May 13 00:22:32.713675 containerd[1471]: time="2025-05-13T00:22:32.713635592Z" level=info msg="StopPodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\"" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.745 [WARNING][5429] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ms9sg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b656054-c5df-4336-9a83-8d89d2e6a28d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119", Pod:"csi-node-driver-ms9sg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41971bf9613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.745 [INFO][5429] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.745 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" iface="eth0" netns="" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.745 [INFO][5429] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.745 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.764 [INFO][5437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.764 [INFO][5437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.764 [INFO][5437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.769 [WARNING][5437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.769 [INFO][5437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.771 [INFO][5437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.775543 containerd[1471]: 2025-05-13 00:22:32.773 [INFO][5429] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.776011 containerd[1471]: time="2025-05-13T00:22:32.775562451Z" level=info msg="TearDown network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" successfully" May 13 00:22:32.776011 containerd[1471]: time="2025-05-13T00:22:32.775592255Z" level=info msg="StopPodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" returns successfully" May 13 00:22:32.776207 containerd[1471]: time="2025-05-13T00:22:32.776175846Z" level=info msg="RemovePodSandbox for \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\"" May 13 00:22:32.776207 containerd[1471]: time="2025-05-13T00:22:32.776202826Z" level=info msg="Forcibly stopping sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\"" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.809 [WARNING][5460] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ms9sg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b656054-c5df-4336-9a83-8d89d2e6a28d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b31ddab4b0cd732bcff49233e0bbe53da374af60d5d4e6857a9a404b133e119", Pod:"csi-node-driver-ms9sg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41971bf9613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.809 [INFO][5460] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.809 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" iface="eth0" netns="" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.809 [INFO][5460] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.809 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.827 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.828 [INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.828 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.833 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.833 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" HandleID="k8s-pod-network.d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" Workload="localhost-k8s-csi--node--driver--ms9sg-eth0" May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.834 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.839267 containerd[1471]: 2025-05-13 00:22:32.836 [INFO][5460] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e" May 13 00:22:32.839779 containerd[1471]: time="2025-05-13T00:22:32.839328324Z" level=info msg="TearDown network for sandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" successfully" May 13 00:22:32.843260 containerd[1471]: time="2025-05-13T00:22:32.843232752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:32.843323 containerd[1471]: time="2025-05-13T00:22:32.843287973Z" level=info msg="RemovePodSandbox \"d096d2831a7b8e9d439aab2bb799479d936c85ff59891c8a6b50d1e62080782e\" returns successfully" May 13 00:22:32.843780 containerd[1471]: time="2025-05-13T00:22:32.843728122Z" level=info msg="StopPodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\"" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.876 [WARNING][5492] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0", GenerateName:"calico-kube-controllers-65dcd6bcdf-", Namespace:"calico-system", SelfLink:"", UID:"3b97b55b-0703-40cf-9f00-a260ed5d0dc1", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65dcd6bcdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78", Pod:"calico-kube-controllers-65dcd6bcdf-dhvvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3052c93544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.876 [INFO][5492] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.876 [INFO][5492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" iface="eth0" netns="" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.876 [INFO][5492] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.876 [INFO][5492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.895 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.895 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.895 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.900 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.900 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.902 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.906314 containerd[1471]: 2025-05-13 00:22:32.904 [INFO][5492] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.906820 containerd[1471]: time="2025-05-13T00:22:32.906343113Z" level=info msg="TearDown network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" successfully" May 13 00:22:32.906820 containerd[1471]: time="2025-05-13T00:22:32.906366235Z" level=info msg="StopPodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" returns successfully" May 13 00:22:32.907048 containerd[1471]: time="2025-05-13T00:22:32.907022669Z" level=info msg="RemovePodSandbox for \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\"" May 13 00:22:32.907048 containerd[1471]: time="2025-05-13T00:22:32.907055909Z" level=info msg="Forcibly stopping sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\"" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.948 [WARNING][5521] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0", GenerateName:"calico-kube-controllers-65dcd6bcdf-", Namespace:"calico-system", SelfLink:"", UID:"3b97b55b-0703-40cf-9f00-a260ed5d0dc1", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65dcd6bcdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22f1a0518524e59bacc4be17d6d45a9ea2d634ec81d311017e187322879aff78", Pod:"calico-kube-controllers-65dcd6bcdf-dhvvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3052c93544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.948 [INFO][5521] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.948 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" iface="eth0" netns="" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.948 [INFO][5521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.948 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.968 [INFO][5530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.968 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.969 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.973 [WARNING][5530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.973 [INFO][5530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" HandleID="k8s-pod-network.a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" Workload="localhost-k8s-calico--kube--controllers--65dcd6bcdf--dhvvt-eth0" May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.974 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:32.979200 containerd[1471]: 2025-05-13 00:22:32.976 [INFO][5521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a" May 13 00:22:32.979200 containerd[1471]: time="2025-05-13T00:22:32.979136757Z" level=info msg="TearDown network for sandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" successfully" May 13 00:22:32.982998 containerd[1471]: time="2025-05-13T00:22:32.982970547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:32.983060 containerd[1471]: time="2025-05-13T00:22:32.983019877Z" level=info msg="RemovePodSandbox \"a755191581c0e81630875b5f2aa69b9a6f8fe2fe15ad9539f3f4f68f410d7c7a\" returns successfully" May 13 00:22:32.983523 containerd[1471]: time="2025-05-13T00:22:32.983491944Z" level=info msg="StopPodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\"" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.016 [WARNING][5552] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296", Pod:"coredns-668d6bf9bc-7ctn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7594d600f58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.016 [INFO][5552] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.016 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" iface="eth0" netns="" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.016 [INFO][5552] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.016 [INFO][5552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.033 [INFO][5560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.033 [INFO][5560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.033 [INFO][5560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.038 [WARNING][5560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.038 [INFO][5560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.039 [INFO][5560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:33.044001 containerd[1471]: 2025-05-13 00:22:33.041 [INFO][5552] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.044411 containerd[1471]: time="2025-05-13T00:22:33.044048284Z" level=info msg="TearDown network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" successfully" May 13 00:22:33.044411 containerd[1471]: time="2025-05-13T00:22:33.044074532Z" level=info msg="StopPodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" returns successfully" May 13 00:22:33.044560 containerd[1471]: time="2025-05-13T00:22:33.044530523Z" level=info msg="RemovePodSandbox for \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\"" May 13 00:22:33.044590 containerd[1471]: time="2025-05-13T00:22:33.044571888Z" level=info msg="Forcibly stopping sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\"" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.109 [WARNING][5583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1b80cf1-00a9-4e0b-8b66-2efa72d2b7ca", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f7be7f4e6ab9684f391f38856a6d3058654d3ad4e40408a2898075dc7ff1296", Pod:"coredns-668d6bf9bc-7ctn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7594d600f58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.109 [INFO][5583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.109 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" iface="eth0" netns="" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.109 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.109 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.130 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.130 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.130 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.137 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.137 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" HandleID="k8s-pod-network.f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" Workload="localhost-k8s-coredns--668d6bf9bc--7ctn5-eth0" May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.138 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:33.143041 containerd[1471]: 2025-05-13 00:22:33.140 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee" May 13 00:22:33.143504 containerd[1471]: time="2025-05-13T00:22:33.143086068Z" level=info msg="TearDown network for sandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" successfully" May 13 00:22:33.147207 containerd[1471]: time="2025-05-13T00:22:33.147178546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:33.147245 containerd[1471]: time="2025-05-13T00:22:33.147223318Z" level=info msg="RemovePodSandbox \"f175422850cec9ccb7a43dfc8bbc8257194c956e57799cd461a279f7591ca7ee\" returns successfully" May 13 00:22:33.147640 containerd[1471]: time="2025-05-13T00:22:33.147607638Z" level=info msg="StopPodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\"" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.180 [WARNING][5613] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"60c928c1-a188-42a1-b0d8-c492716938ca", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605", Pod:"calico-apiserver-5ff4dd9db7-f2txh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdd03de0c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.180 [INFO][5613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.180 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" iface="eth0" netns="" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.180 [INFO][5613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.180 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.199 [INFO][5622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.200 [INFO][5622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.200 [INFO][5622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.205 [WARNING][5622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.205 [INFO][5622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.206 [INFO][5622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:33.211084 containerd[1471]: 2025-05-13 00:22:33.208 [INFO][5613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.211479 containerd[1471]: time="2025-05-13T00:22:33.211124099Z" level=info msg="TearDown network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" successfully" May 13 00:22:33.211479 containerd[1471]: time="2025-05-13T00:22:33.211149917Z" level=info msg="StopPodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" returns successfully" May 13 00:22:33.211622 containerd[1471]: time="2025-05-13T00:22:33.211600167Z" level=info msg="RemovePodSandbox for \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\"" May 13 00:22:33.211663 containerd[1471]: time="2025-05-13T00:22:33.211628057Z" level=info msg="Forcibly stopping sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\"" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.247 [WARNING][5645] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0", GenerateName:"calico-apiserver-5ff4dd9db7-", Namespace:"calico-apiserver", SelfLink:"", UID:"60c928c1-a188-42a1-b0d8-c492716938ca", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4dd9db7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7309e286b8bf7b0597b9532d79043c120c0957d3d052fd1c41d699b23f4f5605", Pod:"calico-apiserver-5ff4dd9db7-f2txh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdd03de0c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.247 [INFO][5645] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.247 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" iface="eth0" netns="" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.247 [INFO][5645] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.247 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.269 [INFO][5653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.269 [INFO][5653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.269 [INFO][5653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.275 [WARNING][5653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.275 [INFO][5653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" HandleID="k8s-pod-network.198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" Workload="localhost-k8s-calico--apiserver--5ff4dd9db7--f2txh-eth0" May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.276 [INFO][5653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:33.280985 containerd[1471]: 2025-05-13 00:22:33.278 [INFO][5645] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36" May 13 00:22:33.280985 containerd[1471]: time="2025-05-13T00:22:33.280971227Z" level=info msg="TearDown network for sandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" successfully" May 13 00:22:33.291079 containerd[1471]: time="2025-05-13T00:22:33.291037472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:33.291146 containerd[1471]: time="2025-05-13T00:22:33.291089577Z" level=info msg="RemovePodSandbox \"198044f4976b6c3dac98cb778c9bdf4c9fd4c27f27dc7b7058b5a41668397c36\" returns successfully" May 13 00:22:37.222006 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:57078.service - OpenSSH per-connection server daemon (10.0.0.1:57078). May 13 00:22:37.263349 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 57078 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:37.265578 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:37.270018 systemd-logind[1458]: New session 20 of user core. May 13 00:22:37.278029 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:22:37.388588 sshd[5661]: pam_unix(sshd:session): session closed for user core May 13 00:22:37.393804 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:57078.service: Deactivated successfully. May 13 00:22:37.396640 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:22:37.397435 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. May 13 00:22:37.398469 systemd-logind[1458]: Removed session 20. May 13 00:22:40.597951 systemd[1]: run-containerd-runc-k8s.io-c8598270c27d603d9344249f60316f4a6af8c9a9f8119cfee21266b115a41357-runc.E66DzC.mount: Deactivated successfully. May 13 00:22:40.642933 kubelet[2514]: E0513 00:22:40.642898 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:42.402177 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:57092.service - OpenSSH per-connection server daemon (10.0.0.1:57092). 
May 13 00:22:42.453254 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 57092 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:42.455397 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:42.459622 systemd-logind[1458]: New session 21 of user core. May 13 00:22:42.474180 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:22:42.580660 sshd[5702]: pam_unix(sshd:session): session closed for user core May 13 00:22:42.585227 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:57092.service: Deactivated successfully. May 13 00:22:42.587306 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:22:42.588149 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. May 13 00:22:42.589128 systemd-logind[1458]: Removed session 21. May 13 00:22:43.712572 kubelet[2514]: I0513 00:22:43.712490 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:47.593798 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:59606.service - OpenSSH per-connection server daemon (10.0.0.1:59606). May 13 00:22:47.635072 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 59606 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:47.637046 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:47.641953 systemd-logind[1458]: New session 22 of user core. May 13 00:22:47.653087 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:22:47.780365 sshd[5718]: pam_unix(sshd:session): session closed for user core May 13 00:22:47.786015 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:59606.service: Deactivated successfully. May 13 00:22:47.788188 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:22:47.789078 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. May 13 00:22:47.790082 systemd-logind[1458]: Removed session 22. May 13 00:22:49.452961 kubelet[2514]: E0513 00:22:49.452928 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:52.793344 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:59614.service - OpenSSH per-connection server daemon (10.0.0.1:59614). May 13 00:22:52.890044 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 59614 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:22:52.891989 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:52.896695 systemd-logind[1458]: New session 23 of user core. May 13 00:22:52.906070 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 00:22:53.018385 sshd[5739]: pam_unix(sshd:session): session closed for user core May 13 00:22:53.022662 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:59614.service: Deactivated successfully. May 13 00:22:53.025002 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:22:53.025653 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. May 13 00:22:53.026479 systemd-logind[1458]: Removed session 23.