Apr 30 00:13:47.993578 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025
Apr 30 00:13:47.993608 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:13:47.993620 kernel: BIOS-provided physical RAM map:
Apr 30 00:13:47.993626 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 00:13:47.993632 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 00:13:47.993639 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 00:13:47.993659 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 00:13:47.993666 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 00:13:47.993673 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 30 00:13:47.993680 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 30 00:13:47.993690 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 30 00:13:47.993696 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 30 00:13:47.993705 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 30 00:13:47.993712 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 30 00:13:47.993722 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 30 00:13:47.993729 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 00:13:47.993738 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Apr 30 00:13:47.993745 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Apr 30 00:13:47.993752 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Apr 30 00:13:47.993759 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Apr 30 00:13:47.993766 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 30 00:13:47.993772 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 00:13:47.993779 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 00:13:47.993786 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 00:13:47.993792 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 30 00:13:47.993799 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 00:13:47.993806 kernel: NX (Execute Disable) protection: active
Apr 30 00:13:47.993815 kernel: APIC: Static calls initialized
Apr 30 00:13:47.993822 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Apr 30 00:13:47.993834 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Apr 30 00:13:47.993841 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Apr 30 00:13:47.993848 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Apr 30 00:13:47.993854 kernel: extended physical RAM map:
Apr 30 00:13:47.993861 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 00:13:47.993867 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 00:13:47.993874 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 00:13:47.993893 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 00:13:47.993906 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 00:13:47.993916 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 30 00:13:47.993923 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 30 00:13:47.993934 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Apr 30 00:13:47.993941 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Apr 30 00:13:47.993948 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Apr 30 00:13:47.993955 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Apr 30 00:13:47.993962 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Apr 30 00:13:47.993974 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 30 00:13:47.993981 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 30 00:13:47.993988 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 30 00:13:47.993995 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 30 00:13:47.994003 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 00:13:47.994010 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Apr 30 00:13:47.994017 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Apr 30 00:13:47.994024 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Apr 30 00:13:47.994031 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Apr 30 00:13:47.994043 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 30 00:13:47.994055 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 00:13:47.994062 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 00:13:47.994069 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 00:13:47.994079 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 30 00:13:47.994086 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 00:13:47.994093 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:13:47.994100 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Apr 30 00:13:47.994107 kernel: random: crng init done
Apr 30 00:13:47.994114 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 30 00:13:47.994121 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 30 00:13:47.994134 kernel: secureboot: Secure boot disabled
Apr 30 00:13:47.994141 kernel: SMBIOS 2.8 present.
Apr 30 00:13:47.994151 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 30 00:13:47.994164 kernel: Hypervisor detected: KVM
Apr 30 00:13:47.994171 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 00:13:47.994186 kernel: kvm-clock: using sched offset of 4585959979 cycles
Apr 30 00:13:47.994202 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 00:13:47.994210 kernel: tsc: Detected 2794.748 MHz processor
Apr 30 00:13:47.994218 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 00:13:47.994225 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 00:13:47.994233 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Apr 30 00:13:47.994249 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 00:13:47.994256 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 00:13:47.994267 kernel: Using GB pages for direct mapping
Apr 30 00:13:47.994276 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:13:47.994283 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 30 00:13:47.994293 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:13:47.994301 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:13:47.994308 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:13:47.994315 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 30 00:13:47.994326 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:13:47.994333 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:13:47.994340 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:13:47.994348 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:13:47.994355 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 00:13:47.994362 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 30 00:13:47.994369 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Apr 30 00:13:47.994377 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 30 00:13:47.994386 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 30 00:13:47.994393 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 30 00:13:47.994401 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 30 00:13:47.994408 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 30 00:13:47.994415 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 30 00:13:47.994422 kernel: No NUMA configuration found
Apr 30 00:13:47.994429 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 30 00:13:47.994436 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Apr 30 00:13:47.994443 kernel: Zone ranges:
Apr 30 00:13:47.994451 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 00:13:47.994460 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 30 00:13:47.994468 kernel: Normal empty
Apr 30 00:13:47.994477 kernel: Movable zone start for each node
Apr 30 00:13:47.994484 kernel: Early memory node ranges
Apr 30 00:13:47.994492 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 00:13:47.994499 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 30 00:13:47.994506 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 30 00:13:47.994513 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 30 00:13:47.994520 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 30 00:13:47.994530 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 30 00:13:47.994537 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Apr 30 00:13:47.994544 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Apr 30 00:13:47.994551 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 30 00:13:47.994558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 00:13:47.994566 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 00:13:47.994582 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 30 00:13:47.994592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 00:13:47.994599 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 30 00:13:47.994607 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 30 00:13:47.994614 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 30 00:13:47.994624 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 30 00:13:47.994634 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 30 00:13:47.994642 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 00:13:47.994657 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 00:13:47.994665 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 00:13:47.994673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 00:13:47.994683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 00:13:47.994690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 00:13:47.994698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 00:13:47.994705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 00:13:47.994713 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 00:13:47.994720 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 00:13:47.994728 kernel: TSC deadline timer available
Apr 30 00:13:47.994735 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 30 00:13:47.994743 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 00:13:47.994753 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 30 00:13:47.994760 kernel: kvm-guest: setup PV sched yield
Apr 30 00:13:47.994768 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 30 00:13:47.994775 kernel: Booting paravirtualized kernel on KVM
Apr 30 00:13:47.994783 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 00:13:47.994791 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 30 00:13:47.994798 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Apr 30 00:13:47.994806 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Apr 30 00:13:47.994813 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 30 00:13:47.994823 kernel: kvm-guest: PV spinlocks enabled
Apr 30 00:13:47.994831 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 00:13:47.994839 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:13:47.994847 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:13:47.994855 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:13:47.994865 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:13:47.994872 kernel: Fallback order for Node 0: 0
Apr 30 00:13:47.994923 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Apr 30 00:13:47.994934 kernel: Policy zone: DMA32
Apr 30 00:13:47.994942 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:13:47.994950 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 175776K reserved, 0K cma-reserved)
Apr 30 00:13:47.994958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 00:13:47.994965 kernel: ftrace: allocating 37946 entries in 149 pages
Apr 30 00:13:47.994973 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 00:13:47.994980 kernel: Dynamic Preempt: voluntary
Apr 30 00:13:47.994988 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:13:47.995001 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:13:47.995012 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 00:13:47.995019 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:13:47.995027 kernel: Rude variant of Tasks RCU enabled.
Apr 30 00:13:47.995035 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:13:47.995042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:13:47.995050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 00:13:47.995057 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 30 00:13:47.995065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:13:47.995073 kernel: Console: colour dummy device 80x25
Apr 30 00:13:47.995083 kernel: printk: console [ttyS0] enabled
Apr 30 00:13:47.995090 kernel: ACPI: Core revision 20230628
Apr 30 00:13:47.995098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 00:13:47.995105 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 00:13:47.995113 kernel: x2apic enabled
Apr 30 00:13:47.995120 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 00:13:47.995131 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 30 00:13:47.995139 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 30 00:13:47.995146 kernel: kvm-guest: setup PV IPIs
Apr 30 00:13:47.995156 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 00:13:47.995164 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 00:13:47.995172 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 30 00:13:47.995179 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 00:13:47.995187 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 00:13:47.995201 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 00:13:47.995209 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 00:13:47.995217 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 00:13:47.995224 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 00:13:47.995243 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 00:13:47.995250 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 00:13:47.995258 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 00:13:47.995271 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 00:13:47.995279 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 00:13:47.995287 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 30 00:13:47.995295 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 30 00:13:47.995305 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 30 00:13:47.995313 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 00:13:47.995324 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 00:13:47.995332 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 00:13:47.995339 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 00:13:47.995347 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 00:13:47.995360 kernel: Freeing SMP alternatives memory: 32K
Apr 30 00:13:47.995368 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:13:47.995375 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:13:47.995388 kernel: landlock: Up and running.
Apr 30 00:13:47.995396 kernel: SELinux: Initializing.
Apr 30 00:13:47.995407 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:13:47.995415 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:13:47.995430 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 00:13:47.995438 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:13:47.995446 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:13:47.995455 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:13:47.995463 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 00:13:47.995471 kernel: ... version: 0
Apr 30 00:13:47.995485 kernel: ... bit width: 48
Apr 30 00:13:47.995494 kernel: ... generic registers: 6
Apr 30 00:13:47.995503 kernel: ... value mask: 0000ffffffffffff
Apr 30 00:13:47.995512 kernel: ... max period: 00007fffffffffff
Apr 30 00:13:47.995520 kernel: ... fixed-purpose events: 0
Apr 30 00:13:47.995529 kernel: ... event mask: 000000000000003f
Apr 30 00:13:47.995537 kernel: signal: max sigframe size: 1776
Apr 30 00:13:47.995544 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:13:47.995560 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:13:47.995576 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:13:47.995588 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 00:13:47.995596 kernel: .... node #0, CPUs: #1 #2 #3
Apr 30 00:13:47.995603 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 00:13:47.995611 kernel: smpboot: Max logical packages: 1
Apr 30 00:13:47.995618 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 30 00:13:47.995630 kernel: devtmpfs: initialized
Apr 30 00:13:47.995643 kernel: x86/mm: Memory block size: 128MB
Apr 30 00:13:47.995658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 30 00:13:47.995677 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 30 00:13:47.995688 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 30 00:13:47.995696 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 30 00:13:47.995704 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Apr 30 00:13:47.995711 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 30 00:13:47.995719 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:13:47.995727 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 00:13:47.995734 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:13:47.995742 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:13:47.995752 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:13:47.995760 kernel: audit: type=2000 audit(1745972026.720:1): state=initialized audit_enabled=0 res=1
Apr 30 00:13:47.995767 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:13:47.995775 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 00:13:47.995782 kernel: cpuidle: using governor menu
Apr 30 00:13:47.995790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:13:47.995797 kernel: dca service started, version 1.12.1
Apr 30 00:13:47.995805 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 30 00:13:47.995813 kernel: PCI: Using configuration type 1 for base access
Apr 30 00:13:47.995823 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 00:13:47.995831 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:13:47.995838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:13:47.995846 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:13:47.995853 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:13:47.995861 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:13:47.995868 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:13:47.995876 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:13:47.995895 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:13:47.995906 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:13:47.995913 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 00:13:47.995920 kernel: ACPI: Interpreter enabled
Apr 30 00:13:47.995928 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 30 00:13:47.995935 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 00:13:47.995943 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 00:13:47.995950 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 00:13:47.995958 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 00:13:47.995965 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:13:47.996187 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:13:47.996325 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 00:13:47.996453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 00:13:47.996463 kernel: PCI host bridge to bus 0000:00
Apr 30 00:13:47.996621 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 00:13:47.996773 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 00:13:47.996904 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 00:13:47.997027 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 30 00:13:47.997188 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 30 00:13:47.997338 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 30 00:13:47.997455 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:13:47.997610 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 00:13:47.997803 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 30 00:13:47.997970 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 30 00:13:47.998096 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 30 00:13:47.998225 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 00:13:47.998366 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 30 00:13:47.998523 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 00:13:47.998721 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 00:13:47.999266 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 30 00:13:47.999400 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 30 00:13:47.999528 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 30 00:13:47.999705 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 30 00:13:47.999836 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 30 00:13:47.999991 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 30 00:13:48.000153 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 30 00:13:48.000328 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 00:13:48.000531 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 30 00:13:48.000685 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 30 00:13:48.000814 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 30 00:13:48.000958 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 30 00:13:48.001105 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 00:13:48.001243 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 00:13:48.001417 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 00:13:48.001557 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 30 00:13:48.001733 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 30 00:13:48.001964 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 00:13:48.002107 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 30 00:13:48.002119 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 00:13:48.002127 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 00:13:48.002135 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 00:13:48.002148 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 00:13:48.002156 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 00:13:48.002164 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 00:13:48.002172 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 00:13:48.002180 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 00:13:48.002187 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 00:13:48.002195 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 00:13:48.002203 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 00:13:48.002211 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 00:13:48.002221 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 00:13:48.002229 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 00:13:48.002236 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 00:13:48.002244 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 00:13:48.002252 kernel: iommu: Default domain type: Translated
Apr 30 00:13:48.002259 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 00:13:48.002273 kernel: efivars: Registered efivars operations
Apr 30 00:13:48.002284 kernel: PCI: Using ACPI for IRQ routing
Apr 30 00:13:48.002291 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 00:13:48.002303 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 30 00:13:48.002310 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 30 00:13:48.002318 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Apr 30 00:13:48.002325 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Apr 30 00:13:48.002333 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 30 00:13:48.002340 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 30 00:13:48.002348 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Apr 30 00:13:48.002356 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 30 00:13:48.002491 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 00:13:48.002662 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 00:13:48.002816 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 00:13:48.002828 kernel: vgaarb: loaded
Apr 30 00:13:48.002836 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 00:13:48.002844 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 00:13:48.002851 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 00:13:48.002859 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:13:48.002867 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:13:48.002879 kernel: pnp: PnP ACPI init
Apr 30 00:13:48.003071 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 30 00:13:48.003083 kernel: pnp: PnP ACPI: found 6 devices
Apr 30 00:13:48.003091 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 00:13:48.003099 kernel: NET: Registered PF_INET protocol family
Apr 30 00:13:48.003138 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:13:48.003149 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:13:48.003157 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:13:48.003172 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:13:48.003182 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:13:48.003190 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:13:48.003198 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:13:48.003206 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:13:48.003218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:13:48.003229 kernel: NET: Registered PF_XDP protocol family
Apr 30 00:13:48.003396 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 30 00:13:48.003550 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 30 00:13:48.003683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 00:13:48.003801 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 00:13:48.003955 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 00:13:48.004105 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 30 00:13:48.004222 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 30 00:13:48.004351 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 30 00:13:48.004364 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:13:48.004377 kernel: Initialise system trusted keyrings
Apr 30 00:13:48.004394 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:13:48.004403 kernel: Key type asymmetric registered
Apr 30 00:13:48.004417 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:13:48.004425 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 00:13:48.004433 kernel: io scheduler mq-deadline registered
Apr 30 00:13:48.004446 kernel: io scheduler kyber registered
Apr 30 00:13:48.004456 kernel: io scheduler bfq registered
Apr 30 00:13:48.004464 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 00:13:48.004473 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 00:13:48.004484 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 00:13:48.004495 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 30 00:13:48.004503 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:13:48.004511 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 00:13:48.004520 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 00:13:48.004530 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 00:13:48.004538 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 00:13:48.004695 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 30 00:13:48.004708 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 00:13:48.004826 kernel: rtc_cmos 00:04: registered as rtc0
Apr 30 00:13:48.004993 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T00:13:47 UTC (1745972027)
Apr 30 00:13:48.005129 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 30 00:13:48.005141 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 00:13:48.005154 kernel: efifb: probing for efifb
Apr 30 00:13:48.005168 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 30 00:13:48.005184 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 30 00:13:48.005192 kernel: efifb: scrolling: redraw
Apr 30 00:13:48.005203 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 00:13:48.005214 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 00:13:48.005222 kernel: fb0: EFI VGA frame buffer device
Apr 30 00:13:48.005230 kernel: pstore: Using crash dump compression: deflate
Apr 30 00:13:48.005238 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 00:13:48.005250 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:13:48.005258 kernel: Segment Routing with IPv6
Apr 30 00:13:48.005266 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:13:48.005274 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:13:48.005287 kernel: Key type dns_resolver registered
Apr 30 00:13:48.005296 kernel: IPI shorthand broadcast: enabled
Apr 30 00:13:48.005308 kernel: sched_clock: Marking stable (1243004084, 166824245)->(1475069649, -65241320)
Apr 30 00:13:48.005317 kernel: registered taskstats version 1
Apr 30 00:13:48.005335 kernel: Loading compiled-in X.509 certificates
Apr 30 00:13:48.005358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597'
Apr 30 00:13:48.005374 kernel: Key type .fscrypt registered
Apr 30 00:13:48.005393 kernel: Key type fscrypt-provisioning registered
Apr 30 00:13:48.005407 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:13:48.005424 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:13:48.005438 kernel: ima: No architecture policies found Apr 30 00:13:48.005454 kernel: clk: Disabling unused clocks Apr 30 00:13:48.005462 kernel: Freeing unused kernel image (initmem) memory: 42992K Apr 30 00:13:48.005470 kernel: Write protecting the kernel read-only data: 36864k Apr 30 00:13:48.005503 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Apr 30 00:13:48.005513 kernel: Run /init as init process Apr 30 00:13:48.005521 kernel: with arguments: Apr 30 00:13:48.005536 kernel: /init Apr 30 00:13:48.005546 kernel: with environment: Apr 30 00:13:48.005563 kernel: HOME=/ Apr 30 00:13:48.005579 kernel: TERM=linux Apr 30 00:13:48.005602 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:13:48.005637 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:13:48.005682 systemd[1]: Detected virtualization kvm. Apr 30 00:13:48.005693 systemd[1]: Detected architecture x86-64. Apr 30 00:13:48.005701 systemd[1]: Running in initrd. Apr 30 00:13:48.005709 systemd[1]: No hostname configured, using default hostname. Apr 30 00:13:48.005718 systemd[1]: Hostname set to . Apr 30 00:13:48.005726 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:13:48.005739 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:13:48.005773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:13:48.005782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 00:13:48.005791 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 00:13:48.005800 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:13:48.005808 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:13:48.005817 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:13:48.005827 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:13:48.005839 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:13:48.005848 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:13:48.005856 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:13:48.005865 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:13:48.005873 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:13:48.005983 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:13:48.005992 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:13:48.006000 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:13:48.006012 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:13:48.006021 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:13:48.006030 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:13:48.006038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:13:48.006047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:13:48.006055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 30 00:13:48.006064 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:13:48.006072 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:13:48.006080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:13:48.006091 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:13:48.006100 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:13:48.006108 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:13:48.006116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:13:48.006125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:48.006133 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:13:48.006142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:13:48.006150 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:13:48.006162 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:13:48.006193 systemd-journald[191]: Collecting audit messages is disabled. Apr 30 00:13:48.006216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:13:48.006225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:13:48.006233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:13:48.006242 systemd-journald[191]: Journal started Apr 30 00:13:48.006260 systemd-journald[191]: Runtime Journal (/run/log/journal/44c600e83c744b979c1016a56d62b4ca) is 6.0M, max 48.3M, 42.2M free. Apr 30 00:13:48.012105 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 00:13:48.007777 systemd-modules-load[194]: Inserted module 'overlay' Apr 30 00:13:48.014948 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:13:48.020267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:13:48.022633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:13:48.039325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:13:48.040369 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:13:48.047912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:13:48.050270 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 30 00:13:48.051194 kernel: Bridge firewalling registered Apr 30 00:13:48.052093 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 00:13:48.053477 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:13:48.056940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:13:48.064133 dracut-cmdline[223]: dracut-dracut-053 Apr 30 00:13:48.067722 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 00:13:48.072391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:13:48.079038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 00:13:48.111415 systemd-resolved[241]: Positive Trust Anchors: Apr 30 00:13:48.111434 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:13:48.111465 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:13:48.114088 systemd-resolved[241]: Defaulting to hostname 'linux'. Apr 30 00:13:48.115368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:13:48.121514 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:13:48.183930 kernel: SCSI subsystem initialized Apr 30 00:13:48.192933 kernel: Loading iSCSI transport class v2.0-870. Apr 30 00:13:48.205937 kernel: iscsi: registered transport (tcp) Apr 30 00:13:48.232208 kernel: iscsi: registered transport (qla4xxx) Apr 30 00:13:48.232286 kernel: QLogic iSCSI HBA Driver Apr 30 00:13:48.287283 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:13:48.299051 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:13:48.327831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 00:13:48.327932 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:13:48.327946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:13:48.374050 kernel: raid6: avx2x4 gen() 21363 MB/s Apr 30 00:13:48.390943 kernel: raid6: avx2x2 gen() 20903 MB/s Apr 30 00:13:48.408263 kernel: raid6: avx2x1 gen() 17715 MB/s Apr 30 00:13:48.408352 kernel: raid6: using algorithm avx2x4 gen() 21363 MB/s Apr 30 00:13:48.426161 kernel: raid6: .... xor() 6883 MB/s, rmw enabled Apr 30 00:13:48.426279 kernel: raid6: using avx2x2 recovery algorithm Apr 30 00:13:48.448931 kernel: xor: automatically using best checksumming function avx Apr 30 00:13:48.612934 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:13:48.628717 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:13:48.641078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:13:48.662775 systemd-udevd[412]: Using default interface naming scheme 'v255'. Apr 30 00:13:48.670369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:13:48.679152 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:13:48.697410 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Apr 30 00:13:48.739293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:13:48.756104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:13:48.825618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:13:48.837122 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:13:48.850379 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:13:48.853843 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 30 00:13:48.857507 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:13:48.860435 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:13:48.864912 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 30 00:13:48.886238 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 00:13:48.895663 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 00:13:48.895679 kernel: GPT:9289727 != 19775487 Apr 30 00:13:48.895697 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 00:13:48.895708 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 00:13:48.895719 kernel: GPT:9289727 != 19775487 Apr 30 00:13:48.895729 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 00:13:48.895739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:13:48.872238 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:13:48.896187 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:13:48.900470 kernel: libata version 3.00 loaded. Apr 30 00:13:48.911155 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 00:13:48.908778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 30 00:13:48.915385 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 00:13:48.955788 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 00:13:48.955809 kernel: AES CTR mode by8 optimization enabled Apr 30 00:13:48.955831 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 00:13:48.956045 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 00:13:48.956225 kernel: scsi host0: ahci Apr 30 00:13:48.956413 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (468) Apr 30 00:13:48.956429 kernel: scsi host1: ahci Apr 30 00:13:48.956612 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (471) Apr 30 00:13:48.956638 kernel: scsi host2: ahci Apr 30 00:13:48.956869 kernel: scsi host3: ahci Apr 30 00:13:48.958835 kernel: scsi host4: ahci Apr 30 00:13:48.959042 kernel: scsi host5: ahci Apr 30 00:13:48.959232 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 30 00:13:48.959248 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 30 00:13:48.959263 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 30 00:13:48.959277 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 30 00:13:48.959297 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 30 00:13:48.959312 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 30 00:13:48.909032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:13:48.911602 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:13:48.915648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:13:48.915832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:13:48.917318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:48.930269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:48.956935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 00:13:48.973790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 00:13:48.979808 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 00:13:48.994928 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 00:13:49.003590 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 00:13:49.013100 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:13:49.014461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:13:49.014532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:13:49.017357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:49.021142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:49.038489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:13:49.056090 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:13:49.085011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 00:13:49.267762 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 00:13:49.267861 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 00:13:49.267878 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 00:13:49.270915 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 00:13:49.270951 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 00:13:49.270966 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 00:13:49.271906 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 00:13:49.273027 kernel: ata3.00: applying bridge limits Apr 30 00:13:49.273907 kernel: ata3.00: configured for UDMA/100 Apr 30 00:13:49.275915 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 00:13:49.330007 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 00:13:49.351713 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 00:13:49.351733 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 30 00:13:49.410110 disk-uuid[555]: Primary Header is updated. Apr 30 00:13:49.410110 disk-uuid[555]: Secondary Entries is updated. Apr 30 00:13:49.410110 disk-uuid[555]: Secondary Header is updated. Apr 30 00:13:49.414906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:13:49.419912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:13:50.454913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:13:50.454979 disk-uuid[583]: The operation has completed successfully. Apr 30 00:13:50.487938 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:13:50.488118 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:13:50.510198 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 30 00:13:50.513629 sh[598]: Success Apr 30 00:13:50.526919 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 00:13:50.567342 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:13:50.592117 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:13:50.596170 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 00:13:50.625115 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f Apr 30 00:13:50.625199 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 00:13:50.625211 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:13:50.627019 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:13:50.627040 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:13:50.631993 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:13:50.632774 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:13:50.640018 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:13:50.642766 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:13:50.651460 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 00:13:50.651489 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 00:13:50.651508 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:13:50.653987 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:13:50.663923 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 30 00:13:50.666161 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 00:13:51.141309 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:13:51.144439 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:13:51.162185 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:13:51.167555 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:13:51.218672 systemd-networkd[780]: lo: Link UP Apr 30 00:13:51.218683 systemd-networkd[780]: lo: Gained carrier Apr 30 00:13:51.222036 systemd-networkd[780]: Enumeration completed Apr 30 00:13:51.222489 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:13:51.222493 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:13:51.224237 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:13:51.225633 systemd-networkd[780]: eth0: Link UP Apr 30 00:13:51.225639 systemd-networkd[780]: eth0: Gained carrier Apr 30 00:13:51.225652 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:13:51.230043 systemd[1]: Reached target network.target - Network. 
Apr 30 00:13:51.271983 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:13:51.338229 ignition[778]: Ignition 2.20.0 Apr 30 00:13:51.338243 ignition[778]: Stage: fetch-offline Apr 30 00:13:51.338305 ignition[778]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:13:51.338320 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:13:51.338453 ignition[778]: parsed url from cmdline: "" Apr 30 00:13:51.338459 ignition[778]: no config URL provided Apr 30 00:13:51.338466 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:13:51.338484 ignition[778]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:13:51.338526 ignition[778]: op(1): [started] loading QEMU firmware config module Apr 30 00:13:51.338534 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 00:13:51.353451 ignition[778]: op(1): [finished] loading QEMU firmware config module Apr 30 00:13:51.392681 ignition[778]: parsing config with SHA512: a0fdc2d064f23f2b049a712cc4d8d42fd7489fb5559ad5648ad4871d337108397e29110904f8243babff2c9d298b9587e1ff48f2da14c7efb16b941f83ff477e Apr 30 00:13:51.396539 unknown[778]: fetched base config from "system" Apr 30 00:13:51.396562 unknown[778]: fetched user config from "qemu" Apr 30 00:13:51.399834 ignition[778]: fetch-offline: fetch-offline passed Apr 30 00:13:51.400992 ignition[778]: Ignition finished successfully Apr 30 00:13:51.404297 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:13:51.406844 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 00:13:51.421065 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 00:13:51.444073 ignition[792]: Ignition 2.20.0 Apr 30 00:13:51.444087 ignition[792]: Stage: kargs Apr 30 00:13:51.444288 ignition[792]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:13:51.444303 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:13:51.467350 ignition[792]: kargs: kargs passed Apr 30 00:13:51.467415 ignition[792]: Ignition finished successfully Apr 30 00:13:51.471868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:13:51.484024 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:13:51.531860 ignition[801]: Ignition 2.20.0 Apr 30 00:13:51.531872 ignition[801]: Stage: disks Apr 30 00:13:51.532066 ignition[801]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:13:51.532078 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:13:51.532982 ignition[801]: disks: disks passed Apr 30 00:13:51.533027 ignition[801]: Ignition finished successfully Apr 30 00:13:51.539096 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:13:51.540384 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:13:51.542327 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:13:51.542542 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:13:51.542875 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:13:51.543365 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:13:51.582329 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:13:51.624800 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 00:13:51.801481 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:13:51.814001 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 30 00:13:51.965915 kernel: EXT4-fs (vda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none. Apr 30 00:13:51.966142 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:13:51.967841 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:13:51.978994 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:13:51.981652 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:13:51.984220 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 00:13:51.984274 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:13:51.991929 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820) Apr 30 00:13:51.991950 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 00:13:51.991964 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 00:13:51.984298 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:13:51.996659 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:13:51.996679 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:13:51.993844 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:13:51.997823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 00:13:52.001096 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 00:13:52.104156 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:13:52.110113 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:13:52.117685 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:13:52.124007 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:13:52.238217 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:13:52.249064 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:13:52.253699 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:13:52.259449 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:13:52.261261 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c Apr 30 00:13:52.279078 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:13:52.332494 ignition[935]: INFO : Ignition 2.20.0 Apr 30 00:13:52.332494 ignition[935]: INFO : Stage: mount Apr 30 00:13:52.335072 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:13:52.335072 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:13:52.335072 ignition[935]: INFO : mount: mount passed Apr 30 00:13:52.335072 ignition[935]: INFO : Ignition finished successfully Apr 30 00:13:52.335426 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:13:52.345101 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:13:52.984074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 00:13:52.994845 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (947)
Apr 30 00:13:52.994892 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:13:52.994903 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:13:52.996915 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:13:52.999924 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:13:53.001139 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:13:53.024126 ignition[964]: INFO : Ignition 2.20.0
Apr 30 00:13:53.024126 ignition[964]: INFO : Stage: files
Apr 30 00:13:53.026178 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:13:53.026178 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:13:53.026178 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:13:53.030278 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:13:53.030278 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:13:53.048217 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:13:53.049856 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:13:53.051520 unknown[964]: wrote ssh authorized keys file for user: core
Apr 30 00:13:53.052855 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:13:53.054248 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:13:53.054248 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:13:53.054248 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:13:53.054248 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 00:13:53.103475 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:13:53.164056 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 30 00:13:53.227560 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:13:53.229993 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:13:53.229993 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:13:53.229993 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:13:53.229993 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:13:53.229993 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:13:53.240729 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 00:13:53.708659 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:13:54.378112 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:13:54.378112 ignition[964]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 00:13:54.613756 ignition[964]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 30 00:13:54.616274 ignition[964]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:13:54.649010 ignition[964]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:13:54.654000 ignition[964]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:13:54.655621 ignition[964]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:13:54.655621 ignition[964]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:13:54.655621 ignition[964]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:13:54.655621 ignition[964]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:13:54.655621 ignition[964]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:13:54.655621 ignition[964]: INFO : files: files passed
Apr 30 00:13:54.655621 ignition[964]: INFO : Ignition finished successfully
Apr 30 00:13:54.657910 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:13:54.670149 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:13:54.672400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:13:54.674572 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:13:54.674689 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:13:54.686584 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 00:13:54.691110 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:13:54.691110 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:13:54.694708 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:13:54.694219 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:13:54.696230 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:13:54.713129 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:13:54.774851 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:13:54.775086 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:13:54.778459 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:13:54.779958 systemd[1]: Reached target initrd.target - Initrd Default Target.
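The files-stage operations logged above (creating user "core" with SSH keys, fetching and writing files, writing a symlink into /etc/extensions, adding a unit drop-in, and setting unit presets) each map to a section of the Ignition config the machine booted with. A minimal sketch of that mapping, built as a Python dict against the Ignition v3 JSON schema — the version number, SSH key, and unit/drop-in contents are placeholder assumptions; only the paths, URLs, and unit names come from this log:

```python
import json

# Hypothetical reconstruction of the *kind* of Ignition config that would
# produce the files-stage operations above. Contents and keys are placeholders.
config = {
    "ignition": {"version": "3.4.0"},  # assumed version
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar-cgroupv1"},
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source":
                 "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf"},
        ],
        "links": [
            # Enables the sysext image written below as /etc/extensions/kubernetes.raw
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "# placeholder drop-in body"}]},
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "# placeholder unit body"},
            # op(12)/op(13) above: preset disabled, enablement symlinks removed
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```

The per-op log lines (op(1), op(2), op(3), ...) are simply Ignition walking these sections in order: users first, then files and links, then unit processing and presets.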
Apr 30 00:13:54.782837 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:13:54.804438 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:13:54.830388 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:13:54.849277 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:13:54.865423 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:13:54.865721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:13:54.866608 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:13:54.867335 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:13:54.867551 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:13:54.868189 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:13:54.868601 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:13:54.868999 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:13:54.869387 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:13:54.869762 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:13:54.870392 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:13:54.870765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:13:54.871420 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:13:54.872988 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:13:54.926108 ignition[1018]: INFO : Ignition 2.20.0
Apr 30 00:13:54.926108 ignition[1018]: INFO : Stage: umount
Apr 30 00:13:54.926108 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:13:54.926108 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:13:54.926108 ignition[1018]: INFO : umount: umount passed
Apr 30 00:13:54.926108 ignition[1018]: INFO : Ignition finished successfully
Apr 30 00:13:54.873377 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:13:54.873738 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:13:54.873924 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:13:54.874753 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:13:54.875334 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:13:54.875677 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:13:54.875840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:13:54.876279 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:13:54.876453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:13:54.877071 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:13:54.877230 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:13:54.877797 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:13:54.878310 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:13:54.884286 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:13:54.887305 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:13:54.887676 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:13:54.890490 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:13:54.891438 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:13:54.892237 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:13:54.892400 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:13:54.894653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:13:54.894834 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:13:54.895479 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:13:54.895677 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:13:54.898008 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:13:54.899934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:13:54.902389 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:13:54.902723 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:13:54.904355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:13:54.904555 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:13:54.914188 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:13:54.914360 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:13:54.927479 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:13:54.927679 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:13:54.949323 systemd[1]: Stopped target network.target - Network.
Apr 30 00:13:54.951118 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:13:54.951215 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:13:54.951564 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:13:54.951628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:13:54.959163 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:13:54.959262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:13:54.961551 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:13:54.961620 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:13:54.964618 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:13:54.968134 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:13:54.969513 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 30 00:13:54.973809 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:13:54.974058 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:13:54.976318 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:13:54.976537 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:13:54.981512 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:13:54.981589 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:13:54.991094 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:13:54.994187 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:13:54.995636 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:13:54.999769 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:13:55.000917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:13:55.003577 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:13:55.004714 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:13:55.030543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:13:55.030659 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:13:55.034831 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:13:55.038616 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:13:55.039609 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:13:55.039793 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:13:55.050941 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:13:55.051068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:13:55.055493 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:13:55.056681 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:13:55.059856 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:13:55.061005 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:13:55.063809 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:13:55.065025 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:13:55.067270 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:13:55.067323 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:13:55.070436 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:13:55.070513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:13:55.073749 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:13:55.073810 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:13:55.076929 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:13:55.076994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:13:55.092099 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:13:55.093360 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:13:55.094679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:13:55.098411 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 00:13:55.098484 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:13:55.102431 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:13:55.102503 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:13:55.105945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:13:55.106009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:13:55.110223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:13:55.111368 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:13:55.114386 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:13:55.128114 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:13:55.135445 systemd[1]: Switching root.
Apr 30 00:13:55.163500 systemd-journald[191]: Journal stopped
Apr 30 00:13:56.915769 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:13:56.915881 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:13:56.915912 kernel: SELinux: policy capability open_perms=1
Apr 30 00:13:56.915930 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:13:56.915942 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:13:56.915964 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:13:56.915989 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:13:56.916001 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:13:56.916012 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:13:56.916024 kernel: audit: type=1403 audit(1745972035.900:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:13:56.916038 systemd[1]: Successfully loaded SELinux policy in 43.029ms.
Apr 30 00:13:56.916063 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.609ms.
Apr 30 00:13:56.916084 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:13:56.916097 systemd[1]: Detected virtualization kvm.
Apr 30 00:13:56.916116 systemd[1]: Detected architecture x86-64.
Apr 30 00:13:56.916128 systemd[1]: Detected first boot.
Apr 30 00:13:56.916140 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:13:56.916153 zram_generator::config[1081]: No configuration found.
Apr 30 00:13:56.916166 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:13:56.916178 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:13:56.916199 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 00:13:56.916213 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:13:56.916226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:13:56.916239 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:13:56.916251 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:13:56.916264 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:13:56.916276 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:13:56.916291 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:13:56.916311 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:13:56.916328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:13:56.916341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:13:56.916354 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:13:56.916366 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:13:56.916378 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:13:56.916391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:13:56.916403 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:13:56.916429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:13:56.916448 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:13:56.916460 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:13:56.916472 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:13:56.916485 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:13:56.916498 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:13:56.916511 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:13:56.916523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:13:56.916535 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:13:56.916553 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:13:56.916565 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:13:56.916578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:13:56.916590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:13:56.916602 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:13:56.916614 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:13:56.916627 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:13:56.916639 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:13:56.916651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:13:56.916669 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:13:56.916681 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:13:56.916693 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:13:56.916706 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:13:56.916719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:13:56.916731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:13:56.916744 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:13:56.916755 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:13:56.916769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:13:56.916787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:13:56.916799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:13:56.916818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:13:56.916842 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:13:56.916856 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 00:13:56.916868 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 00:13:56.916880 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:13:56.916909 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:13:56.916930 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:13:56.916942 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:13:56.916954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:13:56.916974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:13:56.916997 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:13:56.917009 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:13:56.917021 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:13:56.917033 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:13:56.917045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:13:56.917066 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:13:56.917078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:13:56.917091 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:13:56.917105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:13:56.917123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:13:56.917135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:13:56.917147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:13:56.917159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:13:56.917171 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:13:56.917183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:13:56.917195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:13:56.917207 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:13:56.917220 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:13:56.917263 systemd-journald[1158]: Collecting audit messages is disabled.
Apr 30 00:13:56.917286 kernel: loop: module loaded
Apr 30 00:13:56.917298 kernel: fuse: init (API version 7.39)
Apr 30 00:13:56.917310 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:13:56.917322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:13:56.917335 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:13:56.917347 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:13:56.917365 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:13:56.917377 systemd-journald[1158]: Journal started
Apr 30 00:13:56.917400 systemd-journald[1158]: Runtime Journal (/run/log/journal/44c600e83c744b979c1016a56d62b4ca) is 6.0M, max 48.3M, 42.2M free.
Apr 30 00:13:56.897144 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Apr 30 00:13:56.897163 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Apr 30 00:13:56.930337 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:13:56.932436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:13:56.932709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:13:56.934563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:13:56.937376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:13:56.940180 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:13:56.965149 kernel: ACPI: bus type drm_connector registered
Apr 30 00:13:56.965545 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:13:56.965901 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:13:56.982226 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:13:56.992016 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:13:56.995335 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:13:56.997011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:13:57.000439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:13:57.003080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:13:57.025431 systemd-journald[1158]: Time spent on flushing to /var/log/journal/44c600e83c744b979c1016a56d62b4ca is 19.303ms for 1037 entries. Apr 30 00:13:57.025431 systemd-journald[1158]: System Journal (/var/log/journal/44c600e83c744b979c1016a56d62b4ca) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:13:57.743578 systemd-journald[1158]: Received client request to flush runtime journal. Apr 30 00:13:57.029600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:13:57.052110 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:13:57.054201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:13:57.074282 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:13:57.219022 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:13:57.222769 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:13:57.511441 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:13:57.560237 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Apr 30 00:13:57.724851 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:13:57.734140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:13:57.746261 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:13:57.854938 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Apr 30 00:13:57.854990 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Apr 30 00:13:57.863042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:13:58.436590 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:13:58.449028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:13:58.477001 systemd-udevd[1245]: Using default interface naming scheme 'v255'. Apr 30 00:13:58.495874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:13:58.545119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1252) Apr 30 00:13:58.547979 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:13:58.601243 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 30 00:13:58.724979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 00:13:58.730913 kernel: ACPI: button: Power Button [PWRF] Apr 30 00:13:58.747867 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 00:13:58.746193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 30 00:13:58.756501 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 30 00:13:58.758017 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 00:13:58.758244 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 00:13:58.758513 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 00:13:58.756409 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:13:58.772065 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 00:13:58.798095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:58.909625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:13:58.910301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:13:58.927048 kernel: kvm_amd: TSC scaling supported Apr 30 00:13:58.927258 kernel: kvm_amd: Nested Virtualization enabled Apr 30 00:13:58.927336 kernel: kvm_amd: Nested Paging enabled Apr 30 00:13:58.927383 kernel: kvm_amd: LBR virtualization supported Apr 30 00:13:58.927427 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Apr 30 00:13:58.927484 kernel: kvm_amd: Virtual GIF supported Apr 30 00:13:58.921131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:13:58.954916 kernel: EDAC MC: Ver: 3.0.0 Apr 30 00:13:58.960461 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:13:58.989506 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:13:58.995847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:13:58.999543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:13:59.010947 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Apr 30 00:13:59.029798 systemd-networkd[1264]: lo: Link UP Apr 30 00:13:59.029809 systemd-networkd[1264]: lo: Gained carrier Apr 30 00:13:59.031645 systemd-networkd[1264]: Enumeration completed Apr 30 00:13:59.031836 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:13:59.032086 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:13:59.032097 systemd-networkd[1264]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:13:59.033219 systemd-networkd[1264]: eth0: Link UP Apr 30 00:13:59.033231 systemd-networkd[1264]: eth0: Gained carrier Apr 30 00:13:59.033246 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:13:59.045314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:13:59.050330 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:13:59.053646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:13:59.055050 systemd-networkd[1264]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:13:59.057750 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:13:59.069073 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:13:59.286253 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:13:59.288319 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:13:59.289817 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Apr 30 00:13:59.289869 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:13:59.291004 systemd[1]: Reached target machines.target - Containers. Apr 30 00:13:59.293444 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:13:59.305180 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:13:59.309499 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:13:59.310924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:13:59.312391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:13:59.318043 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:13:59.324543 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:13:59.327665 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:13:59.341278 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:13:59.342911 kernel: loop0: detected capacity change from 0 to 140992 Apr 30 00:13:59.368955 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:13:59.380957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:13:59.382530 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 30 00:13:59.398002 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 00:13:59.434916 kernel: loop2: detected capacity change from 0 to 138184 Apr 30 00:13:59.514925 kernel: loop3: detected capacity change from 0 to 140992 Apr 30 00:13:59.529930 kernel: loop4: detected capacity change from 0 to 210664 Apr 30 00:13:59.537911 kernel: loop5: detected capacity change from 0 to 138184 Apr 30 00:13:59.547712 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 00:13:59.548392 (sd-merge)[1319]: Merged extensions into '/usr'. Apr 30 00:13:59.553625 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:13:59.553649 systemd[1]: Reloading... Apr 30 00:13:59.651959 zram_generator::config[1344]: No configuration found. Apr 30 00:13:59.769676 ldconfig[1303]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:13:59.805814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:13:59.874392 systemd[1]: Reloading finished in 320 ms. Apr 30 00:13:59.892521 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:13:59.894335 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:13:59.912186 systemd[1]: Starting ensure-sysext.service... Apr 30 00:13:59.915996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:13:59.920469 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:13:59.920490 systemd[1]: Reloading... Apr 30 00:13:59.962862 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 30 00:13:59.963369 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:13:59.964642 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:13:59.965709 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Apr 30 00:13:59.966067 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Apr 30 00:13:59.972977 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:13:59.972998 systemd-tmpfiles[1392]: Skipping /boot Apr 30 00:13:59.993928 zram_generator::config[1423]: No configuration found. Apr 30 00:13:59.998948 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:13:59.999104 systemd-tmpfiles[1392]: Skipping /boot Apr 30 00:14:00.115035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:14:00.195750 systemd[1]: Reloading finished in 274 ms. Apr 30 00:14:00.222359 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:14:00.241951 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:14:00.245494 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:14:00.248875 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:14:00.254051 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:14:00.259087 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:14:00.267025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 30 00:14:00.267249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:14:00.272509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:14:00.284014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:14:00.288614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:14:00.290103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:14:00.290364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:14:00.294267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:14:00.294593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:14:00.304570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:14:00.304909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:14:00.308279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:14:00.309842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:14:00.310129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:14:00.313695 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:14:00.314030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:14:00.320245 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 30 00:14:00.328388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:14:00.328689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:14:00.330659 augenrules[1499]: No rules Apr 30 00:14:00.333284 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:14:00.333746 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:14:00.336125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:14:00.336540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:14:00.341190 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:14:00.353669 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:14:00.364176 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:14:00.365480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:14:00.369170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:14:00.373123 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:14:00.380233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:14:00.387008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:14:00.387012 systemd-resolved[1468]: Positive Trust Anchors: Apr 30 00:14:00.387027 systemd-resolved[1468]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:14:00.387066 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:14:00.388615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:14:00.395605 systemd-resolved[1468]: Defaulting to hostname 'linux'. Apr 30 00:14:00.398276 augenrules[1514]: /sbin/augenrules: No change Apr 30 00:14:00.405297 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:14:00.405980 augenrules[1540]: No rules Apr 30 00:14:00.406624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:14:00.408376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:14:00.410786 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:14:00.412930 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:14:00.413309 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:14:00.414875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:14:00.415165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:14:00.416921 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 30 00:14:00.417183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:14:00.419003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:14:00.419285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:14:00.421340 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:14:00.421653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:14:00.426702 systemd[1]: Finished ensure-sysext.service. Apr 30 00:14:00.428488 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:14:00.437023 systemd[1]: Reached target network.target - Network. Apr 30 00:14:00.438070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:14:00.439457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:14:00.439539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:14:00.455110 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:14:00.456407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:14:00.521256 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:14:00.522440 systemd-timesyncd[1558]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 00:14:00.522488 systemd-timesyncd[1558]: Initial clock synchronization to Wed 2025-04-30 00:14:00.904413 UTC. Apr 30 00:14:00.523383 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:14:00.524592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 30 00:14:00.525929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:14:00.527237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:14:00.528610 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:14:00.528649 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:14:00.529658 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:14:00.531012 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:14:00.532240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:14:00.533492 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:14:00.535345 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:14:00.538830 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:14:00.541722 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:14:00.550541 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:14:00.551707 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:14:00.552720 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:14:00.553863 systemd[1]: System is tainted: cgroupsv1 Apr 30 00:14:00.553930 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:14:00.553961 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:14:00.555831 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:14:00.559052 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Apr 30 00:14:00.562146 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:14:00.568053 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:14:00.569352 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:14:00.572042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:14:00.573675 jq[1564]: false Apr 30 00:14:00.579779 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:14:00.584301 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:14:00.590241 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:14:00.590680 extend-filesystems[1566]: Found loop3 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found loop4 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found loop5 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found sr0 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda1 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda2 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda3 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found usr Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda4 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda6 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda7 Apr 30 00:14:00.592776 extend-filesystems[1566]: Found vda9 Apr 30 00:14:00.592776 extend-filesystems[1566]: Checking size of /dev/vda9 Apr 30 00:14:00.604046 dbus-daemon[1563]: [system] SELinux support is enabled Apr 30 00:14:00.599755 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 30 00:14:00.608062 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:14:00.621643 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:14:00.628025 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:14:00.630451 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:14:00.637755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:14:00.638193 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:14:00.638696 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:14:00.639135 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:14:00.640508 jq[1588]: true Apr 30 00:14:00.652252 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:14:00.652588 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 30 00:14:00.656893 update_engine[1587]: I20250430 00:14:00.656802 1587 main.cc:92] Flatcar Update Engine starting Apr 30 00:14:00.663197 extend-filesystems[1566]: Resized partition /dev/vda9 Apr 30 00:14:00.667448 update_engine[1587]: I20250430 00:14:00.666409 1587 update_check_scheduler.cc:74] Next update check in 5m50s Apr 30 00:14:00.673215 extend-filesystems[1599]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:14:00.673791 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:14:00.682350 jq[1594]: true Apr 30 00:14:00.687922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1252) Apr 30 00:14:00.710024 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 00:14:00.722984 tar[1591]: linux-amd64/helm Apr 30 00:14:00.724586 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:14:00.781061 systemd-networkd[1264]: eth0: Gained IPv6LL Apr 30 00:14:00.799068 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:14:00.867760 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:14:00.867849 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:14:00.869595 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:14:00.869613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:14:00.872177 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 30 00:14:00.879032 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:14:00.881004 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:14:00.882999 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:14:00.895273 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:14:00.903794 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 00:14:00.906732 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:14:00.987834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:14:00.992982 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:14:00.997510 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:14:00.998190 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:14:01.040152 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:14:01.051056 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 00:14:01.051537 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 00:14:01.053319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:14:01.138251 systemd-logind[1582]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 00:14:01.138283 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 00:14:01.143186 systemd-logind[1582]: New seat seat0. Apr 30 00:14:01.144965 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:14:01.155790 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:14:01.167380 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:14:01.210640 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Apr 30 00:14:01.218038 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:14:01.228736 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 00:14:01.228454 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:14:01.740078 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 00:14:01.740078 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 00:14:01.740078 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 00:14:01.749908 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Apr 30 00:14:01.747378 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:14:01.752707 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:14:01.759863 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:14:01.774317 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:14:01.788407 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:39158.service - OpenSSH per-connection server daemon (10.0.0.1:39158). Apr 30 00:14:01.801774 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:14:01.805575 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:14:01.808523 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 00:14:02.094533 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 39158 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:14:02.098208 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:14:02.111606 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:14:02.128505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 30 00:14:02.132319 systemd-logind[1582]: New session 1 of user core.
Apr 30 00:14:02.215535 tar[1591]: linux-amd64/LICENSE
Apr 30 00:14:02.218519 tar[1591]: linux-amd64/README.md
Apr 30 00:14:02.247692 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:14:02.251068 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:14:02.258025 containerd[1596]: time="2025-04-30T00:14:02.257771373Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:14:02.269559 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:14:02.293995 containerd[1596]: time="2025-04-30T00:14:02.293668468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.294170 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:14:02.302274 containerd[1596]: time="2025-04-30T00:14:02.301833803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:14:02.302274 containerd[1596]: time="2025-04-30T00:14:02.301922083Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:14:02.302274 containerd[1596]: time="2025-04-30T00:14:02.301953197Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:14:02.302580 containerd[1596]: time="2025-04-30T00:14:02.302556275Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:14:02.302695 containerd[1596]: time="2025-04-30T00:14:02.302676786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.302856196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.302878605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.303272464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.303292473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.303309110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.303320821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.303482132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.303860199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.304098434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.304116971Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:14:02.304540 containerd[1596]: time="2025-04-30T00:14:02.304254558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:14:02.304903 containerd[1596]: time="2025-04-30T00:14:02.304335281Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:14:02.378346 containerd[1596]: time="2025-04-30T00:14:02.378128290Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:14:02.378591 containerd[1596]: time="2025-04-30T00:14:02.378568961Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:14:02.378938 containerd[1596]: time="2025-04-30T00:14:02.378913816Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:14:02.379087 containerd[1596]: time="2025-04-30T00:14:02.379063781Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:14:02.379209 containerd[1596]: time="2025-04-30T00:14:02.379184021Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:14:02.379557 containerd[1596]: time="2025-04-30T00:14:02.379534220Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:14:02.380229 containerd[1596]: time="2025-04-30T00:14:02.380201988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:14:02.380488 containerd[1596]: time="2025-04-30T00:14:02.380465347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:14:02.380567 containerd[1596]: time="2025-04-30T00:14:02.380549954Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:14:02.380664 containerd[1596]: time="2025-04-30T00:14:02.380644507Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:14:02.380749 containerd[1596]: time="2025-04-30T00:14:02.380721942Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.380825 containerd[1596]: time="2025-04-30T00:14:02.380807718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381034 containerd[1596]: time="2025-04-30T00:14:02.380999277Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381159 containerd[1596]: time="2025-04-30T00:14:02.381124912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381243 containerd[1596]: time="2025-04-30T00:14:02.381225122Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381317 containerd[1596]: time="2025-04-30T00:14:02.381301212Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381423 containerd[1596]: time="2025-04-30T00:14:02.381403561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381496 containerd[1596]: time="2025-04-30T00:14:02.381479525Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:14:02.381604 containerd[1596]: time="2025-04-30T00:14:02.381568777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.381718 containerd[1596]: time="2025-04-30T00:14:02.381687169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.381808 containerd[1596]: time="2025-04-30T00:14:02.381788986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.381892 containerd[1596]: time="2025-04-30T00:14:02.381873947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382002 containerd[1596]: time="2025-04-30T00:14:02.381982445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382090 containerd[1596]: time="2025-04-30T00:14:02.382069734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382253 containerd[1596]: time="2025-04-30T00:14:02.382227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382341 containerd[1596]: time="2025-04-30T00:14:02.382322405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382465 containerd[1596]: time="2025-04-30T00:14:02.382440578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382546 containerd[1596]: time="2025-04-30T00:14:02.382528983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382616 containerd[1596]: time="2025-04-30T00:14:02.382600720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382687 containerd[1596]: time="2025-04-30T00:14:02.382671433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382795 containerd[1596]: time="2025-04-30T00:14:02.382776686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.382907 containerd[1596]: time="2025-04-30T00:14:02.382881978Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:14:02.383253 containerd[1596]: time="2025-04-30T00:14:02.383209298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.383350 containerd[1596]: time="2025-04-30T00:14:02.383332533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.383449 containerd[1596]: time="2025-04-30T00:14:02.383431094Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:14:02.383582 containerd[1596]: time="2025-04-30T00:14:02.383562857Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:14:02.383687 containerd[1596]: time="2025-04-30T00:14:02.383655793Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:14:02.383758 containerd[1596]: time="2025-04-30T00:14:02.383741620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:14:02.383835 containerd[1596]: time="2025-04-30T00:14:02.383814933Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:14:02.383934 containerd[1596]: time="2025-04-30T00:14:02.383899936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.384035 containerd[1596]: time="2025-04-30T00:14:02.384002859Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:14:02.384108 containerd[1596]: time="2025-04-30T00:14:02.384092100Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:14:02.384172 containerd[1596]: time="2025-04-30T00:14:02.384156697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:14:02.384740 containerd[1596]: time="2025-04-30T00:14:02.384674000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:14:02.385120 containerd[1596]: time="2025-04-30T00:14:02.385097240Z" level=info msg="Connect containerd service"
Apr 30 00:14:02.385239 containerd[1596]: time="2025-04-30T00:14:02.385219598Z" level=info msg="using legacy CRI server"
Apr 30 00:14:02.385299 containerd[1596]: time="2025-04-30T00:14:02.385286816Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:14:02.385522 containerd[1596]: time="2025-04-30T00:14:02.385500846Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:14:02.412499 containerd[1596]: time="2025-04-30T00:14:02.412436040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.412833290Z" level=info msg="Start subscribing containerd event"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413107629Z" level=info msg="Start recovering state"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413172091Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413239892Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413292122Z" level=info msg="Start event monitor"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413311013Z" level=info msg="Start snapshots syncer"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413324457Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413336533Z" level=info msg="Start streaming server"
Apr 30 00:14:02.413610 containerd[1596]: time="2025-04-30T00:14:02.413447368Z" level=info msg="containerd successfully booted in 0.158032s"
Apr 30 00:14:02.419242 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:14:02.549095 systemd[1694]: Queued start job for default target default.target.
Apr 30 00:14:02.549604 systemd[1694]: Created slice app.slice - User Application Slice.
Apr 30 00:14:02.549624 systemd[1694]: Reached target paths.target - Paths.
Apr 30 00:14:02.549638 systemd[1694]: Reached target timers.target - Timers.
Apr 30 00:14:02.561205 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:14:02.571217 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:14:02.571429 systemd[1694]: Reached target sockets.target - Sockets.
Apr 30 00:14:02.571445 systemd[1694]: Reached target basic.target - Basic System.
Apr 30 00:14:02.571730 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:14:02.571871 systemd[1694]: Reached target default.target - Main User Target.
Apr 30 00:14:02.571917 systemd[1694]: Startup finished in 261ms.
Apr 30 00:14:02.574804 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:14:02.667627 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:39172.service - OpenSSH per-connection server daemon (10.0.0.1:39172).
Apr 30 00:14:02.713900 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 39172 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:02.715821 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:02.722038 systemd-logind[1582]: New session 2 of user core.
Apr 30 00:14:02.732564 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:14:02.815182 sshd[1713]: Connection closed by 10.0.0.1 port 39172
Apr 30 00:14:02.815701 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:02.905780 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:39188.service - OpenSSH per-connection server daemon (10.0.0.1:39188).
Apr 30 00:14:02.908475 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:39172.service: Deactivated successfully.
Apr 30 00:14:02.912528 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:14:02.913764 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:14:02.915856 systemd-logind[1582]: Removed session 2.
Apr 30 00:14:02.950552 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 39188 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:02.953540 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:02.958956 systemd-logind[1582]: New session 3 of user core.
Apr 30 00:14:02.970559 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:14:03.034940 sshd[1721]: Connection closed by 10.0.0.1 port 39188
Apr 30 00:14:03.035873 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:03.048978 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:39188.service: Deactivated successfully.
Apr 30 00:14:03.052706 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:14:03.053739 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:14:03.055237 systemd-logind[1582]: Removed session 3.
Apr 30 00:14:03.926036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:03.928872 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:14:03.930781 systemd[1]: Startup finished in 9.540s (kernel) + 8.072s (userspace) = 17.612s.
Apr 30 00:14:03.968896 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:14:05.121125 kubelet[1734]: E0430 00:14:05.120970 1734 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:14:05.127111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:14:05.127655 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:14:13.263461 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:34728.service - OpenSSH per-connection server daemon (10.0.0.1:34728).
Apr 30 00:14:13.307932 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 34728 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:13.309867 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:13.315770 systemd-logind[1582]: New session 4 of user core.
Apr 30 00:14:13.330491 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:14:13.387978 sshd[1751]: Connection closed by 10.0.0.1 port 34728
Apr 30 00:14:13.388421 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:13.400394 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:34742.service - OpenSSH per-connection server daemon (10.0.0.1:34742).
Apr 30 00:14:13.401020 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:34728.service: Deactivated successfully.
Apr 30 00:14:13.404181 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:14:13.405196 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:14:13.406476 systemd-logind[1582]: Removed session 4.
Apr 30 00:14:13.438915 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 34742 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:13.440885 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:13.446716 systemd-logind[1582]: New session 5 of user core.
Apr 30 00:14:13.456442 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:14:13.508963 sshd[1759]: Connection closed by 10.0.0.1 port 34742
Apr 30 00:14:13.509501 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:13.525472 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:34744.service - OpenSSH per-connection server daemon (10.0.0.1:34744).
Apr 30 00:14:13.526753 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:34742.service: Deactivated successfully.
Apr 30 00:14:13.529866 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:14:13.530638 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:14:13.532823 systemd-logind[1582]: Removed session 5.
Apr 30 00:14:13.567682 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 34744 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:13.569683 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:13.575444 systemd-logind[1582]: New session 6 of user core.
Apr 30 00:14:13.596448 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:14:13.656640 sshd[1767]: Connection closed by 10.0.0.1 port 34744
Apr 30 00:14:13.657563 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:13.667945 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:34760.service - OpenSSH per-connection server daemon (10.0.0.1:34760).
Apr 30 00:14:13.668794 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:34744.service: Deactivated successfully.
Apr 30 00:14:13.676142 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:14:13.678982 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:14:13.687274 systemd-logind[1582]: Removed session 6.
Apr 30 00:14:13.745505 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 34760 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:13.744015 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:13.769140 systemd-logind[1582]: New session 7 of user core.
Apr 30 00:14:13.807674 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:14:13.948457 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:14:13.948990 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:14:13.989451 sudo[1776]: pam_unix(sudo:session): session closed for user root
Apr 30 00:14:14.009796 sshd[1775]: Connection closed by 10.0.0.1 port 34760
Apr 30 00:14:14.005706 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:14.019476 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:34776.service - OpenSSH per-connection server daemon (10.0.0.1:34776).
Apr 30 00:14:14.020376 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:34760.service: Deactivated successfully.
Apr 30 00:14:14.031429 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:14:14.036545 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:14:14.040365 systemd-logind[1582]: Removed session 7.
Apr 30 00:14:14.095814 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 34776 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:14.100522 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:14.119269 systemd-logind[1582]: New session 8 of user core.
Apr 30 00:14:14.128008 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:14:14.205507 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:14:14.206135 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:14:14.231382 sudo[1786]: pam_unix(sudo:session): session closed for user root
Apr 30 00:14:14.246571 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 00:14:14.248581 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:14:14.309656 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:14:14.410380 augenrules[1808]: No rules
Apr 30 00:14:14.412925 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:14:14.413407 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:14:14.421829 sudo[1785]: pam_unix(sudo:session): session closed for user root
Apr 30 00:14:14.436301 sshd[1784]: Connection closed by 10.0.0.1 port 34776
Apr 30 00:14:14.437106 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Apr 30 00:14:14.461433 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:34788.service - OpenSSH per-connection server daemon (10.0.0.1:34788).
Apr 30 00:14:14.462792 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:34776.service: Deactivated successfully.
Apr 30 00:14:14.469722 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:14:14.469905 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:14:14.487391 systemd-logind[1582]: Removed session 8.
Apr 30 00:14:14.557150 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 34788 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:14:14.560497 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:14:14.577527 systemd-logind[1582]: New session 9 of user core.
Apr 30 00:14:14.593449 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:14:14.663728 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:14:14.664277 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:14:15.310132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:14:15.328812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:14:15.696168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:15.704865 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:14:15.871385 kubelet[1847]: E0430 00:14:15.867614 1847 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:14:15.880758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:14:15.881188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:14:16.072465 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:14:16.078861 (dockerd)[1866]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:14:18.207176 dockerd[1866]: time="2025-04-30T00:14:18.207022241Z" level=info msg="Starting up"
Apr 30 00:14:19.074580 dockerd[1866]: time="2025-04-30T00:14:19.073851195Z" level=info msg="Loading containers: start."
Apr 30 00:14:19.714041 kernel: Initializing XFRM netlink socket
Apr 30 00:14:20.022054 systemd-networkd[1264]: docker0: Link UP
Apr 30 00:14:20.161112 dockerd[1866]: time="2025-04-30T00:14:20.156863093Z" level=info msg="Loading containers: done."
Apr 30 00:14:20.466155 dockerd[1866]: time="2025-04-30T00:14:20.465985807Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:14:20.466351 dockerd[1866]: time="2025-04-30T00:14:20.466173544Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Apr 30 00:14:20.466466 dockerd[1866]: time="2025-04-30T00:14:20.466442811Z" level=info msg="Daemon has completed initialization"
Apr 30 00:14:20.542427 dockerd[1866]: time="2025-04-30T00:14:20.541776998Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:14:20.541977 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:14:22.678826 containerd[1596]: time="2025-04-30T00:14:22.678768809Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:14:24.367200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222043440.mount: Deactivated successfully.
Apr 30 00:14:25.973599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:14:25.987178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:14:26.162159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:26.167598 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:14:26.466779 kubelet[2113]: E0430 00:14:26.466557 2113 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:14:26.472177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:14:26.472600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:14:28.499983 containerd[1596]: time="2025-04-30T00:14:28.499870391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:28.566806 containerd[1596]: time="2025-04-30T00:14:28.566716740Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
Apr 30 00:14:28.633788 containerd[1596]: time="2025-04-30T00:14:28.633697836Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:28.670406 containerd[1596]: time="2025-04-30T00:14:28.670336628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:28.671738 containerd[1596]: time="2025-04-30T00:14:28.671683617Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 5.992852886s"
Apr 30 00:14:28.671738 containerd[1596]: time="2025-04-30T00:14:28.671724516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 00:14:28.698029 containerd[1596]: time="2025-04-30T00:14:28.697958358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:14:32.495363 containerd[1596]: time="2025-04-30T00:14:32.495277361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:32.501152 containerd[1596]: time="2025-04-30T00:14:32.501071075Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
Apr 30 00:14:32.502866 containerd[1596]: time="2025-04-30T00:14:32.502829498Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:32.506681 containerd[1596]: time="2025-04-30T00:14:32.506601446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:32.508047 containerd[1596]: time="2025-04-30T00:14:32.508002016Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 3.809985612s"
Apr 30 00:14:32.508047 containerd[1596]: time="2025-04-30T00:14:32.508044708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 00:14:32.548991 containerd[1596]: time="2025-04-30T00:14:32.548698926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:14:34.761911 containerd[1596]: time="2025-04-30T00:14:34.761757782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:34.764937 containerd[1596]: time="2025-04-30T00:14:34.764125679Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
Apr 30 00:14:34.829024 containerd[1596]: time="2025-04-30T00:14:34.828927151Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:34.866920 containerd[1596]: time="2025-04-30T00:14:34.866807809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:34.869288 containerd[1596]: time="2025-04-30T00:14:34.869204668Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.320442148s"
Apr 30 00:14:34.869288 containerd[1596]: time="2025-04-30T00:14:34.869275852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 00:14:34.898654 containerd[1596]: time="2025-04-30T00:14:34.898338414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:14:36.487759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 00:14:36.501243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:14:36.672630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:36.678585 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:14:36.745808 kubelet[2187]: E0430 00:14:36.745582 2187 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:14:36.750303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:14:36.750710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:14:38.246652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338651155.mount: Deactivated successfully.
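The failed kubelet starts above (restart counters 2 and 3) share one root cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written during `kubeadm init`/`kubeadm join`, so this crash loop is expected until bootstrap completes. The missing path can be pulled straight out of the structured error text; a small sketch (the `line` below is an abridged copy of the journal entry, and the regex is illustrative):

```python
import re

# Abridged err field from one of the kubelet failure entries above.
line = ('run.go:74] "command failed" err="failed to load kubelet config file, '
        'path: /var/lib/kubelet/config.yaml, error: ... no such file or directory"')

# The kubelet error embeds the offending path as "path: <p>, error: ...".
m = re.search(r"path: (\S+?),", line)
print(m.group(1))  # /var/lib/kubelet/config.yaml
```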
Apr 30 00:14:41.310074 containerd[1596]: time="2025-04-30T00:14:41.309985851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:41.422038 containerd[1596]: time="2025-04-30T00:14:41.421913382Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
Apr 30 00:14:41.492041 containerd[1596]: time="2025-04-30T00:14:41.491952124Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:41.536857 containerd[1596]: time="2025-04-30T00:14:41.536741878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:41.537635 containerd[1596]: time="2025-04-30T00:14:41.537563078Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 6.639157961s"
Apr 30 00:14:41.537635 containerd[1596]: time="2025-04-30T00:14:41.537623933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 00:14:41.565039 containerd[1596]: time="2025-04-30T00:14:41.564869975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:14:42.165967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128976910.mount: Deactivated successfully.
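The kube-proxy pull above reports both the bytes fetched ("bytes read=29185817") and the wall time ("in 6.639157961s"), so an approximate effective pull rate falls out directly. This is only a rough figure: "bytes read" counts what this pull actually transferred, which can differ from the reported image size when layers are cached. A minimal sketch with the two numbers copied from the log:

```python
bytes_read = 29_185_817   # "bytes read=29185817" from the log above
elapsed_s = 6.639157961   # "in 6.639157961s" from the same pull

# Effective transfer rate in MiB/s (1 MiB = 2**20 bytes).
rate_mib_s = bytes_read / elapsed_s / (1 << 20)
print(f"{rate_mib_s:.2f} MiB/s")  # ~4.19 MiB/s
```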
Apr 30 00:14:43.324599 containerd[1596]: time="2025-04-30T00:14:43.324525849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:43.326039 containerd[1596]: time="2025-04-30T00:14:43.325986068Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 00:14:43.327514 containerd[1596]: time="2025-04-30T00:14:43.327478377Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:43.331045 containerd[1596]: time="2025-04-30T00:14:43.331004266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:43.332048 containerd[1596]: time="2025-04-30T00:14:43.332010180Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.767073225s"
Apr 30 00:14:43.332048 containerd[1596]: time="2025-04-30T00:14:43.332049127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 00:14:43.354687 containerd[1596]: time="2025-04-30T00:14:43.354426654Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:14:43.847736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74844576.mount: Deactivated successfully.
Apr 30 00:14:43.854860 containerd[1596]: time="2025-04-30T00:14:43.854814374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:43.855614 containerd[1596]: time="2025-04-30T00:14:43.855569966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Apr 30 00:14:43.856786 containerd[1596]: time="2025-04-30T00:14:43.856753039Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:43.859531 containerd[1596]: time="2025-04-30T00:14:43.859494103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:43.860239 containerd[1596]: time="2025-04-30T00:14:43.860209757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 505.744115ms"
Apr 30 00:14:43.860292 containerd[1596]: time="2025-04-30T00:14:43.860236845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 00:14:43.882934 containerd[1596]: time="2025-04-30T00:14:43.882690230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:14:44.483405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122826181.mount: Deactivated successfully.
Apr 30 00:14:46.345833 update_engine[1587]: I20250430 00:14:46.345672 1587 update_attempter.cc:509] Updating boot flags...
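containerd prints pull times as Go `time.Duration` strings, so the unit changes with magnitude: the tiny pause image above finished "in 505.744115ms" while kube-scheduler earlier took "2.320442148s". When comparing pulls it helps to normalize to seconds first; a small sketch (the helper is illustrative and only handles the two suffixes seen in this log, not the full Go duration grammar):

```python
def duration_to_seconds(d: str) -> float:
    """Convert a Go-style duration string with an ms/s suffix to seconds."""
    units = {"ms": 1e-3, "s": 1.0}
    for suffix in ("ms", "s"):  # must test "ms" before "s"
        if d.endswith(suffix):
            return float(d[: -len(suffix)]) * units[suffix]
    raise ValueError(f"unsupported duration: {d}")

print(duration_to_seconds("505.744115ms"))  # 0.505744115 (pause:3.9)
print(duration_to_seconds("2.320442148s"))  # 2.320442148 (kube-scheduler)
```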
Apr 30 00:14:46.557959 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2323)
Apr 30 00:14:46.601914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2323)
Apr 30 00:14:46.888750 containerd[1596]: time="2025-04-30T00:14:46.888650387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:46.889465 containerd[1596]: time="2025-04-30T00:14:46.889393665Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Apr 30 00:14:46.890607 containerd[1596]: time="2025-04-30T00:14:46.890559726Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:46.893389 containerd[1596]: time="2025-04-30T00:14:46.893344370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:14:46.894660 containerd[1596]: time="2025-04-30T00:14:46.894616103Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.011887348s"
Apr 30 00:14:46.894660 containerd[1596]: time="2025-04-30T00:14:46.894652901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Apr 30 00:14:46.930123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:14:46.941105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:14:47.088369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:47.094378 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:14:47.464410 kubelet[2353]: E0430 00:14:47.464365 2353 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:14:47.469189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:14:47.469810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:14:49.614568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:49.629150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:14:49.647212 systemd[1]: Reloading requested from client PID 2437 ('systemctl') (unit session-9.scope)...
Apr 30 00:14:49.647229 systemd[1]: Reloading...
Apr 30 00:14:49.741913 zram_generator::config[2476]: No configuration found.
Apr 30 00:14:50.037743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:14:50.118225 systemd[1]: Reloading finished in 470 ms.
Apr 30 00:14:50.171017 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:14:50.171168 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:14:50.171670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
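The three "Scheduled restart job" entries above (counters 2, 3, and 4, at 00:14:25.97, 00:14:36.49, and 00:14:46.93) arrive roughly 10.5 s apart: each attempt fails almost immediately, and systemd waits its configured restart delay before the next try. That cadence is consistent with the ~10 s RestartSec kubeadm's kubelet drop-in typically sets, though the drop-in itself is not shown in this log. A quick check of the cadence from the journal timestamps:

```python
# Journal timestamps of the three "Scheduled restart job" entries above,
# expressed as seconds past 00:14:00.
restarts = [25.973599, 36.487759, 46.930123]

# Gap between consecutive restart attempts.
gaps = [b - a for a, b in zip(restarts, restarts[1:])]
print([round(g, 2) for g in gaps])  # [10.51, 10.44]
```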
Apr 30 00:14:50.174323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:14:50.333300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:14:50.339276 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:14:50.381216 kubelet[2536]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:14:50.381216 kubelet[2536]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:14:50.381216 kubelet[2536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
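The deprecation warnings above say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A hypothetical minimal KubeletConfiguration carrying those two settings might look like the sketch below (field names per the kubelet.config.k8s.io/v1beta1 API; the endpoint value is illustrative, and the plugin dir matches the Flexvolume path this kubelet logs later):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces the deprecated --container-runtime-endpoint flag
# (supported as a config-file field since Kubernetes v1.27):
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces the deprecated --volume-plugin-dir flag:
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```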
Apr 30 00:14:50.381661 kubelet[2536]: I0430 00:14:50.381255 2536 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:14:51.375411 kubelet[2536]: I0430 00:14:51.375354 2536 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:14:51.375411 kubelet[2536]: I0430 00:14:51.375390 2536 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:14:51.375639 kubelet[2536]: I0430 00:14:51.375613 2536 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:14:51.389786 kubelet[2536]: I0430 00:14:51.389722 2536 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:14:51.390415 kubelet[2536]: E0430 00:14:51.390226 2536 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.408986 kubelet[2536]: I0430 00:14:51.408936 2536 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:14:51.409448 kubelet[2536]: I0430 00:14:51.409403 2536 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:14:51.409639 kubelet[2536]: I0430 00:14:51.409442 2536 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:14:51.409727 kubelet[2536]: I0430 00:14:51.409656 2536 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:14:51.409727 kubelet[2536]: I0430 00:14:51.409667 2536 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:14:51.409882 kubelet[2536]: I0430 00:14:51.409854 2536 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:14:51.410767 kubelet[2536]: I0430 00:14:51.410741 2536 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:14:51.410800 kubelet[2536]: I0430 00:14:51.410781 2536 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:14:51.410820 kubelet[2536]: I0430 00:14:51.410817 2536 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:14:51.410843 kubelet[2536]: I0430 00:14:51.410837 2536 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:14:51.413672 kubelet[2536]: W0430 00:14:51.413576 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.413672 kubelet[2536]: E0430 00:14:51.413652 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.413672 kubelet[2536]: W0430 00:14:51.413627 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.413758 kubelet[2536]: E0430 00:14:51.413684 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.415744 kubelet[2536]: I0430 00:14:51.415725 2536 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 00:14:51.417302 kubelet[2536]: I0430 00:14:51.417264 2536 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:14:51.417373 kubelet[2536]: W0430 00:14:51.417343 2536 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:14:51.418051 kubelet[2536]: I0430 00:14:51.418028 2536 server.go:1264] "Started kubelet"
Apr 30 00:14:51.419936 kubelet[2536]: I0430 00:14:51.418156 2536 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:14:51.419936 kubelet[2536]: I0430 00:14:51.418397 2536 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:14:51.419936 kubelet[2536]: I0430 00:14:51.418717 2536 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:14:51.419936 kubelet[2536]: I0430 00:14:51.419420 2536 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:14:51.421739 kubelet[2536]: I0430 00:14:51.420735 2536 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:14:51.421739 kubelet[2536]: I0430 00:14:51.420982 2536 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:14:51.421739 kubelet[2536]: I0430 00:14:51.421110 2536 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:14:51.421739 kubelet[2536]: I0430 00:14:51.421171 2536 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:14:51.422162 kubelet[2536]: E0430 00:14:51.422111 2536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms"
Apr 30 00:14:51.422267 kubelet[2536]: W0430 00:14:51.422218 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.422318 kubelet[2536]: E0430 00:14:51.422293 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.424298 kubelet[2536]: I0430 00:14:51.424271 2536 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:14:51.424382 kubelet[2536]: I0430 00:14:51.424368 2536 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:14:51.425352 kubelet[2536]: E0430 00:14:51.425206 2536 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:14:51.425763 kubelet[2536]: I0430 00:14:51.425744 2536 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:14:51.427110 kubelet[2536]: E0430 00:14:51.426989 2536 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183af05dd8b7280c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:14:51.417995276 +0000 UTC m=+1.074483625,LastTimestamp:2025-04-30 00:14:51.417995276 +0000 UTC m=+1.074483625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 30 00:14:51.445407 kubelet[2536]: I0430 00:14:51.445344 2536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:14:51.449913 kubelet[2536]: I0430 00:14:51.447073 2536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:14:51.449913 kubelet[2536]: I0430 00:14:51.447113 2536 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 00:14:51.449913 kubelet[2536]: I0430 00:14:51.447144 2536 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 00:14:51.449913 kubelet[2536]: E0430 00:14:51.447209 2536 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:14:51.450378 kubelet[2536]: W0430 00:14:51.450326 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.450424 kubelet[2536]: E0430 00:14:51.450393 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Apr 30 00:14:51.455691 kubelet[2536]: I0430 00:14:51.455670 2536 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:14:51.455691 kubelet[2536]: I0430 00:14:51.455688 2536 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:14:51.455780 kubelet[2536]: I0430 00:14:51.455707 2536 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:14:51.523324 kubelet[2536]: I0430 00:14:51.523277 2536 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:14:51.523708 kubelet[2536]: E0430 00:14:51.523671 2536 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Apr 30 00:14:51.548035 kubelet[2536]: E0430 00:14:51.547989 2536 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 00:14:51.622901 kubelet[2536]: E0430 00:14:51.622826 2536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms"
Apr 30 00:14:51.725793 kubelet[2536]: I0430 00:14:51.725762 2536 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:14:51.726166 kubelet[2536]: E0430 00:14:51.726132 2536 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Apr 30 00:14:51.748331 kubelet[2536]: E0430 00:14:51.748277 2536 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 00:14:51.904378 kubelet[2536]: I0430 00:14:51.904317 2536 policy_none.go:49] "None policy: Start"
Apr 30 00:14:51.905436 kubelet[2536]: I0430 00:14:51.905392 2536 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:14:51.905526 kubelet[2536]: I0430 00:14:51.905465 2536 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:14:51.917380 kubelet[2536]: I0430 00:14:51.917343 2536 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:14:51.917630 kubelet[2536]: I0430 00:14:51.917589 2536 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:14:51.917750 kubelet[2536]: I0430 00:14:51.917729 2536 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:14:51.919483 kubelet[2536]: E0430 00:14:51.919451 2536 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 30 00:14:52.024122 kubelet[2536]: E0430 00:14:52.023941 2536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms"
Apr 30 00:14:52.128106 kubelet[2536]: I0430 00:14:52.128047 2536 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:14:52.128629 kubelet[2536]: E0430 00:14:52.128597 2536 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Apr 30 00:14:52.148977 kubelet[2536]: I0430 00:14:52.148867 2536 topology_manager.go:215] "Topology Admit Handler" podUID="5516de7d72d6403e7df9833437dcb6ba" podNamespace="kube-system" podName="kube-apiserver-localhost"
Apr 30 00:14:52.150485 kubelet[2536]: I0430 00:14:52.150453 2536 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Apr 30 00:14:52.151596 kubelet[2536]: I0430 00:14:52.151546 2536 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
Apr 30 00:14:52.227513 kubelet[2536]: I0430 00:14:52.227450 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
Apr 30 00:14:52.227513 kubelet[2536]: I0430 00:14:52.227515 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5516de7d72d6403e7df9833437dcb6ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5516de7d72d6403e7df9833437dcb6ba\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:52.227513 kubelet[2536]: I0430 00:14:52.227551 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5516de7d72d6403e7df9833437dcb6ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5516de7d72d6403e7df9833437dcb6ba\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:52.228281 kubelet[2536]: I0430 00:14:52.227597 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:52.228281 kubelet[2536]: I0430 00:14:52.227618 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:52.228281 kubelet[2536]: I0430 00:14:52.227645 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:52.228281 kubelet[2536]: I0430 00:14:52.227661 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5516de7d72d6403e7df9833437dcb6ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5516de7d72d6403e7df9833437dcb6ba\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:52.228281 kubelet[2536]: I0430 00:14:52.227707 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:52.228414 kubelet[2536]: I0430 00:14:52.227737 2536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:52.457563 kubelet[2536]: E0430 00:14:52.457259 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:14:52.458430 containerd[1596]: time="2025-04-30T00:14:52.458371408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5516de7d72d6403e7df9833437dcb6ba,Namespace:kube-system,Attempt:0,}"
Apr 30 00:14:52.458985 containerd[1596]: time="2025-04-30T00:14:52.458806824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
Apr 30 00:14:52.459049 kubelet[2536]: E0430 00:14:52.458461 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30
00:14:52.460228 kubelet[2536]: E0430 00:14:52.460192 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:52.460569 containerd[1596]: time="2025-04-30T00:14:52.460522569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" Apr 30 00:14:52.577299 kubelet[2536]: W0430 00:14:52.577217 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:52.577299 kubelet[2536]: E0430 00:14:52.577281 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:52.738347 kubelet[2536]: W0430 00:14:52.738248 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:52.738347 kubelet[2536]: E0430 00:14:52.738346 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:52.825229 kubelet[2536]: E0430 00:14:52.825103 2536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.39:6443: connect: connection refused" interval="1.6s" Apr 30 00:14:52.899472 kubelet[2536]: W0430 00:14:52.899339 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:52.899472 kubelet[2536]: E0430 00:14:52.899478 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:52.931827 kubelet[2536]: I0430 00:14:52.931782 2536 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:14:52.932426 kubelet[2536]: E0430 00:14:52.932353 2536 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Apr 30 00:14:53.006327 kubelet[2536]: W0430 00:14:53.006109 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:53.006327 kubelet[2536]: E0430 00:14:53.006229 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:53.410899 kubelet[2536]: E0430 00:14:53.410754 2536 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:54.122715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585465015.mount: Deactivated successfully. Apr 30 00:14:54.134969 containerd[1596]: time="2025-04-30T00:14:54.134853896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:14:54.139914 containerd[1596]: time="2025-04-30T00:14:54.139786331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 00:14:54.141071 containerd[1596]: time="2025-04-30T00:14:54.141017579Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:14:54.143461 containerd[1596]: time="2025-04-30T00:14:54.143396657Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:14:54.144442 containerd[1596]: time="2025-04-30T00:14:54.144359686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:14:54.146122 containerd[1596]: time="2025-04-30T00:14:54.146086188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:14:54.147011 containerd[1596]: time="2025-04-30T00:14:54.146968477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:14:54.148029 containerd[1596]: time="2025-04-30T00:14:54.147993735Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:14:54.149162 containerd[1596]: time="2025-04-30T00:14:54.149132435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.690629519s" Apr 30 00:14:54.153232 containerd[1596]: time="2025-04-30T00:14:54.153169393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.694297462s" Apr 30 00:14:54.153919 containerd[1596]: time="2025-04-30T00:14:54.153857178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.693244665s" Apr 30 00:14:54.393572 containerd[1596]: time="2025-04-30T00:14:54.391073533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:14:54.393572 containerd[1596]: time="2025-04-30T00:14:54.392842080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:14:54.393572 containerd[1596]: time="2025-04-30T00:14:54.392859097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:14:54.393572 containerd[1596]: time="2025-04-30T00:14:54.392978224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:14:54.394986 containerd[1596]: time="2025-04-30T00:14:54.394273193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:14:54.394986 containerd[1596]: time="2025-04-30T00:14:54.394935881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:14:54.394986 containerd[1596]: time="2025-04-30T00:14:54.394948380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:14:54.395185 containerd[1596]: time="2025-04-30T00:14:54.395027116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:14:54.397041 containerd[1596]: time="2025-04-30T00:14:54.396733031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:14:54.397041 containerd[1596]: time="2025-04-30T00:14:54.396780818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:14:54.397041 containerd[1596]: time="2025-04-30T00:14:54.396809042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:14:54.397041 containerd[1596]: time="2025-04-30T00:14:54.397005138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:14:54.425980 kubelet[2536]: E0430 00:14:54.425925 2536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="3.2s" Apr 30 00:14:54.459967 containerd[1596]: time="2025-04-30T00:14:54.459919389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0594075806472fdc69e17506281f114da5614bb4f1af83f0b639b94d9b402bc\"" Apr 30 00:14:54.460900 kubelet[2536]: E0430 00:14:54.460850 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:54.467620 containerd[1596]: time="2025-04-30T00:14:54.467570890Z" level=info msg="CreateContainer within sandbox \"e0594075806472fdc69e17506281f114da5614bb4f1af83f0b639b94d9b402bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:14:54.467757 containerd[1596]: time="2025-04-30T00:14:54.467680355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cd53621ebc312bf151b84f98112083f5e61dca9f749eb95e79a87eed759909a\"" Apr 30 00:14:54.469130 kubelet[2536]: E0430 00:14:54.469098 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:54.470846 
containerd[1596]: time="2025-04-30T00:14:54.470804035Z" level=info msg="CreateContainer within sandbox \"9cd53621ebc312bf151b84f98112083f5e61dca9f749eb95e79a87eed759909a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:14:54.476842 containerd[1596]: time="2025-04-30T00:14:54.476800394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5516de7d72d6403e7df9833437dcb6ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d97db3250428caac2c475914a0e9ca379022539ac1d1b1a500299ef1469175fb\"" Apr 30 00:14:54.478037 kubelet[2536]: E0430 00:14:54.478002 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:54.480126 containerd[1596]: time="2025-04-30T00:14:54.480094986Z" level=info msg="CreateContainer within sandbox \"d97db3250428caac2c475914a0e9ca379022539ac1d1b1a500299ef1469175fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:14:54.507960 kubelet[2536]: W0430 00:14:54.507836 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:54.507960 kubelet[2536]: E0430 00:14:54.507936 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:54.525142 containerd[1596]: time="2025-04-30T00:14:54.525104294Z" level=info msg="CreateContainer within sandbox \"e0594075806472fdc69e17506281f114da5614bb4f1af83f0b639b94d9b402bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd2e86614cf20097fb30d7d866629e3777e7f905c07f255e655f3c29af8fe07d\"" 
Apr 30 00:14:54.525666 containerd[1596]: time="2025-04-30T00:14:54.525640761Z" level=info msg="StartContainer for \"fd2e86614cf20097fb30d7d866629e3777e7f905c07f255e655f3c29af8fe07d\"" Apr 30 00:14:54.534472 kubelet[2536]: I0430 00:14:54.534434 2536 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:14:54.534969 kubelet[2536]: E0430 00:14:54.534929 2536 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Apr 30 00:14:54.538607 containerd[1596]: time="2025-04-30T00:14:54.538564107Z" level=info msg="CreateContainer within sandbox \"d97db3250428caac2c475914a0e9ca379022539ac1d1b1a500299ef1469175fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18073635250c46f37d2b251adbebd71647123abaf21b4d8487449eeb5eb1b454\"" Apr 30 00:14:54.538994 containerd[1596]: time="2025-04-30T00:14:54.538964941Z" level=info msg="StartContainer for \"18073635250c46f37d2b251adbebd71647123abaf21b4d8487449eeb5eb1b454\"" Apr 30 00:14:54.541072 containerd[1596]: time="2025-04-30T00:14:54.541040703Z" level=info msg="CreateContainer within sandbox \"9cd53621ebc312bf151b84f98112083f5e61dca9f749eb95e79a87eed759909a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b11191f9bb89cd3e1083cedd6c896787e54492ee9d9db2d3222b617440c6a42\"" Apr 30 00:14:54.543120 containerd[1596]: time="2025-04-30T00:14:54.543083381Z" level=info msg="StartContainer for \"4b11191f9bb89cd3e1083cedd6c896787e54492ee9d9db2d3222b617440c6a42\"" Apr 30 00:14:54.628648 containerd[1596]: time="2025-04-30T00:14:54.628529474Z" level=info msg="StartContainer for \"fd2e86614cf20097fb30d7d866629e3777e7f905c07f255e655f3c29af8fe07d\" returns successfully" Apr 30 00:14:54.643934 containerd[1596]: time="2025-04-30T00:14:54.643783791Z" level=info msg="StartContainer for 
\"4b11191f9bb89cd3e1083cedd6c896787e54492ee9d9db2d3222b617440c6a42\" returns successfully" Apr 30 00:14:54.643934 containerd[1596]: time="2025-04-30T00:14:54.643905002Z" level=info msg="StartContainer for \"18073635250c46f37d2b251adbebd71647123abaf21b4d8487449eeb5eb1b454\" returns successfully" Apr 30 00:14:54.653641 kubelet[2536]: W0430 00:14:54.652685 2536 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:54.653641 kubelet[2536]: E0430 00:14:54.652764 2536 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Apr 30 00:14:55.464407 kubelet[2536]: E0430 00:14:55.464346 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:55.475581 kubelet[2536]: E0430 00:14:55.475528 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:55.479468 kubelet[2536]: E0430 00:14:55.479438 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:56.281350 kubelet[2536]: E0430 00:14:56.281285 2536 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 00:14:56.483317 kubelet[2536]: E0430 00:14:56.481941 2536 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:56.705598 kubelet[2536]: E0430 00:14:56.705460 2536 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 00:14:57.158330 kubelet[2536]: E0430 00:14:57.158269 2536 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 00:14:57.632429 kubelet[2536]: E0430 00:14:57.632371 2536 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 00:14:57.736829 kubelet[2536]: I0430 00:14:57.736633 2536 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:14:57.799447 kubelet[2536]: I0430 00:14:57.799371 2536 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 00:14:57.808430 kubelet[2536]: E0430 00:14:57.808370 2536 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:14:57.909575 kubelet[2536]: E0430 00:14:57.909325 2536 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:14:58.010232 kubelet[2536]: E0430 00:14:58.010126 2536 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:14:58.110448 kubelet[2536]: E0430 00:14:58.110373 2536 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:14:58.211599 kubelet[2536]: E0430 00:14:58.211384 2536 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:14:58.326105 systemd[1]: Reloading requested from client PID 
2816 ('systemctl') (unit session-9.scope)... Apr 30 00:14:58.326127 systemd[1]: Reloading... Apr 30 00:14:58.418822 kubelet[2536]: I0430 00:14:58.418774 2536 apiserver.go:52] "Watching apiserver" Apr 30 00:14:58.421386 kubelet[2536]: I0430 00:14:58.421358 2536 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:14:58.436929 zram_generator::config[2858]: No configuration found. Apr 30 00:14:58.569796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:14:58.571092 kubelet[2536]: E0430 00:14:58.571067 2536 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:14:58.672832 systemd[1]: Reloading finished in 346 ms. Apr 30 00:14:58.720563 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:14:58.748490 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:14:58.749058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:14:58.766271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:14:58.932060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:14:58.939133 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:14:58.993927 kubelet[2910]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:14:58.993927 kubelet[2910]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Apr 30 00:14:58.993927 kubelet[2910]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:14:58.993927 kubelet[2910]: I0430 00:14:58.993380 2910 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:14:59.000180 kubelet[2910]: I0430 00:14:59.000137 2910 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:14:59.000180 kubelet[2910]: I0430 00:14:59.000166 2910 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:14:59.000402 kubelet[2910]: I0430 00:14:59.000378 2910 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:14:59.002585 kubelet[2910]: I0430 00:14:59.002539 2910 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:14:59.004717 kubelet[2910]: I0430 00:14:59.004670 2910 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:14:59.013575 kubelet[2910]: I0430 00:14:59.013534 2910 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:14:59.014297 kubelet[2910]: I0430 00:14:59.014238 2910 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:14:59.014518 kubelet[2910]: I0430 00:14:59.014328 2910 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:14:59.014627 kubelet[2910]: I0430 00:14:59.014536 2910 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
00:14:59.014627 kubelet[2910]: I0430 00:14:59.014547 2910 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:14:59.014627 kubelet[2910]: I0430 00:14:59.014608 2910 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:14:59.014752 kubelet[2910]: I0430 00:14:59.014739 2910 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:14:59.014784 kubelet[2910]: I0430 00:14:59.014763 2910 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:14:59.014813 kubelet[2910]: I0430 00:14:59.014788 2910 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:14:59.014813 kubelet[2910]: I0430 00:14:59.014811 2910 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:14:59.017924 kubelet[2910]: I0430 00:14:59.017396 2910 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 00:14:59.017924 kubelet[2910]: I0430 00:14:59.017816 2910 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:14:59.019788 kubelet[2910]: I0430 00:14:59.018525 2910 server.go:1264] "Started kubelet"
Apr 30 00:14:59.019788 kubelet[2910]: I0430 00:14:59.019183 2910 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:14:59.020325 kubelet[2910]: I0430 00:14:59.020286 2910 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:14:59.026848 kubelet[2910]: I0430 00:14:59.026791 2910 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:14:59.028096 kubelet[2910]: I0430 00:14:59.028063 2910 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:14:59.028424 kubelet[2910]: I0430 00:14:59.028399 2910 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:14:59.030489 kubelet[2910]: I0430 00:14:59.029692 2910 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:14:59.031925 kubelet[2910]: I0430 00:14:59.031507 2910 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:14:59.031925 kubelet[2910]: I0430 00:14:59.031780 2910 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:14:59.033022 kubelet[2910]: I0430 00:14:59.032986 2910 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:14:59.033182 kubelet[2910]: I0430 00:14:59.033105 2910 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:14:59.038839 kubelet[2910]: I0430 00:14:59.038219 2910 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:14:59.052319 kubelet[2910]: E0430 00:14:59.052264 2910 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:14:59.054000 kubelet[2910]: I0430 00:14:59.053343 2910 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:14:59.061584 kubelet[2910]: I0430 00:14:59.061541 2910 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:14:59.061696 kubelet[2910]: I0430 00:14:59.061628 2910 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 00:14:59.062042 kubelet[2910]: I0430 00:14:59.061951 2910 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 00:14:59.062126 kubelet[2910]: E0430 00:14:59.062024 2910 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:14:59.127700 kubelet[2910]: I0430 00:14:59.127563 2910 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:14:59.127700 kubelet[2910]: I0430 00:14:59.127589 2910 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:14:59.127700 kubelet[2910]: I0430 00:14:59.127612 2910 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:14:59.127964 kubelet[2910]: I0430 00:14:59.127804 2910 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:14:59.127964 kubelet[2910]: I0430 00:14:59.127819 2910 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:14:59.127964 kubelet[2910]: I0430 00:14:59.127845 2910 policy_none.go:49] "None policy: Start"
Apr 30 00:14:59.128709 kubelet[2910]: I0430 00:14:59.128683 2910 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:14:59.128769 kubelet[2910]: I0430 00:14:59.128716 2910 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:14:59.128907 kubelet[2910]: I0430 00:14:59.128875 2910 state_mem.go:75] "Updated machine memory state"
Apr 30 00:14:59.130554 kubelet[2910]: I0430 00:14:59.130521 2910 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:14:59.132443 kubelet[2910]: I0430 00:14:59.130745 2910 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:14:59.132443 kubelet[2910]: I0430 00:14:59.130873 2910 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:14:59.134150 kubelet[2910]: I0430 00:14:59.133786 2910 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:14:59.146009 kubelet[2910]: I0430 00:14:59.144771 2910 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Apr 30 00:14:59.146009 kubelet[2910]: I0430 00:14:59.144941 2910 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Apr 30 00:14:59.162353 kubelet[2910]: I0430 00:14:59.162280 2910 topology_manager.go:215] "Topology Admit Handler" podUID="5516de7d72d6403e7df9833437dcb6ba" podNamespace="kube-system" podName="kube-apiserver-localhost"
Apr 30 00:14:59.162497 kubelet[2910]: I0430 00:14:59.162395 2910 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Apr 30 00:14:59.162497 kubelet[2910]: I0430 00:14:59.162475 2910 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
Apr 30 00:14:59.170957 kubelet[2910]: E0430 00:14:59.170861 2910 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:59.329752 kubelet[2910]: I0430 00:14:59.329694 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5516de7d72d6403e7df9833437dcb6ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5516de7d72d6403e7df9833437dcb6ba\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:59.329752 kubelet[2910]: I0430 00:14:59.329744 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5516de7d72d6403e7df9833437dcb6ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5516de7d72d6403e7df9833437dcb6ba\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:59.329752 kubelet[2910]: I0430 00:14:59.329775 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5516de7d72d6403e7df9833437dcb6ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5516de7d72d6403e7df9833437dcb6ba\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:14:59.330032 kubelet[2910]: I0430 00:14:59.329842 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:59.330032 kubelet[2910]: I0430 00:14:59.329933 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:59.330032 kubelet[2910]: I0430 00:14:59.329955 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:59.330032 kubelet[2910]: I0430 00:14:59.329972 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:59.330032 kubelet[2910]: I0430 00:14:59.329990 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:14:59.330253 kubelet[2910]: I0430 00:14:59.330015 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
Apr 30 00:14:59.471113 kubelet[2910]: E0430 00:14:59.471072 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:14:59.471113 kubelet[2910]: E0430 00:14:59.471125 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:14:59.471742 kubelet[2910]: E0430 00:14:59.471710 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:00.017606 kubelet[2910]: I0430 00:15:00.016301 2910 apiserver.go:52] "Watching apiserver"
Apr 30 00:15:00.028420 kubelet[2910]: I0430 00:15:00.028337 2910 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:15:00.081868 kubelet[2910]: I0430 00:15:00.081642 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.081617532 podStartE2EDuration="2.081617532s" podCreationTimestamp="2025-04-30 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:15:00.072244539 +0000 UTC m=+1.127859059" watchObservedRunningTime="2025-04-30 00:15:00.081617532 +0000 UTC m=+1.137232052"
Apr 30 00:15:00.090974 kubelet[2910]: E0430 00:15:00.090926 2910 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:15:00.091962 kubelet[2910]: E0430 00:15:00.091577 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:00.093104 kubelet[2910]: I0430 00:15:00.093051 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.093034649 podStartE2EDuration="1.093034649s" podCreationTimestamp="2025-04-30 00:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:15:00.082002943 +0000 UTC m=+1.137617463" watchObservedRunningTime="2025-04-30 00:15:00.093034649 +0000 UTC m=+1.148649169"
Apr 30 00:15:00.093190 kubelet[2910]: E0430 00:15:00.093124 2910 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 30 00:15:00.093300 kubelet[2910]: E0430 00:15:00.093279 2910 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 30 00:15:00.093520 kubelet[2910]: E0430 00:15:00.093492 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:00.093678 kubelet[2910]: E0430 00:15:00.093664 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:00.117915 kubelet[2910]: I0430 00:15:00.115082 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.115057997 podStartE2EDuration="1.115057997s" podCreationTimestamp="2025-04-30 00:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:15:00.09355952 +0000 UTC m=+1.149174040" watchObservedRunningTime="2025-04-30 00:15:00.115057997 +0000 UTC m=+1.170672507"
Apr 30 00:15:01.091914 kubelet[2910]: E0430 00:15:01.089471 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:01.097714 kubelet[2910]: E0430 00:15:01.096430 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:01.097714 kubelet[2910]: E0430 00:15:01.096570 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:02.091328 kubelet[2910]: E0430 00:15:02.091274 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:04.466032 sudo[1821]: pam_unix(sudo:session): session closed for user root
Apr 30 00:15:04.467781 sshd[1820]: Connection closed by 10.0.0.1 port 34788
Apr 30 00:15:04.469050 sshd-session[1814]: pam_unix(sshd:session): session closed for user core
Apr 30 00:15:04.474596 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:34788.service: Deactivated successfully.
Apr 30 00:15:04.477994 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 00:15:04.478861 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit.
Apr 30 00:15:04.480321 systemd-logind[1582]: Removed session 9.
Apr 30 00:15:08.273729 kubelet[2910]: E0430 00:15:08.273661 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:09.103172 kubelet[2910]: E0430 00:15:09.102960 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:09.745726 kubelet[2910]: E0430 00:15:09.745665 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:11.929710 kubelet[2910]: E0430 00:15:11.929648 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:12.739448 kubelet[2910]: I0430 00:15:12.739397 2910 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:15:12.740169 containerd[1596]: time="2025-04-30T00:15:12.740006072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:15:12.740746 kubelet[2910]: I0430 00:15:12.740281 2910 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:15:13.498916 kubelet[2910]: I0430 00:15:13.496331 2910 topology_manager.go:215] "Topology Admit Handler" podUID="7cc2ef4a-174d-4d4b-94f7-41187101b6af" podNamespace="kube-system" podName="kube-proxy-wvqtk"
Apr 30 00:15:13.510518 kubelet[2910]: I0430 00:15:13.509685 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cc2ef4a-174d-4d4b-94f7-41187101b6af-xtables-lock\") pod \"kube-proxy-wvqtk\" (UID: \"7cc2ef4a-174d-4d4b-94f7-41187101b6af\") " pod="kube-system/kube-proxy-wvqtk"
Apr 30 00:15:13.510863 kubelet[2910]: I0430 00:15:13.510763 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54rdp\" (UniqueName: \"kubernetes.io/projected/7cc2ef4a-174d-4d4b-94f7-41187101b6af-kube-api-access-54rdp\") pod \"kube-proxy-wvqtk\" (UID: \"7cc2ef4a-174d-4d4b-94f7-41187101b6af\") " pod="kube-system/kube-proxy-wvqtk"
Apr 30 00:15:13.511842 kubelet[2910]: I0430 00:15:13.511798 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cc2ef4a-174d-4d4b-94f7-41187101b6af-kube-proxy\") pod \"kube-proxy-wvqtk\" (UID: \"7cc2ef4a-174d-4d4b-94f7-41187101b6af\") " pod="kube-system/kube-proxy-wvqtk"
Apr 30 00:15:13.511996 kubelet[2910]: I0430 00:15:13.511980 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cc2ef4a-174d-4d4b-94f7-41187101b6af-lib-modules\") pod \"kube-proxy-wvqtk\" (UID: \"7cc2ef4a-174d-4d4b-94f7-41187101b6af\") " pod="kube-system/kube-proxy-wvqtk"
Apr 30 00:15:13.549730 kubelet[2910]: I0430 00:15:13.549662 2910 topology_manager.go:215] "Topology Admit Handler" podUID="63303f85-7fd4-4f0c-857a-7b787d3aca1e" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-nd8dm"
Apr 30 00:15:13.613134 kubelet[2910]: I0430 00:15:13.613044 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hlt\" (UniqueName: \"kubernetes.io/projected/63303f85-7fd4-4f0c-857a-7b787d3aca1e-kube-api-access-q2hlt\") pod \"tigera-operator-797db67f8-nd8dm\" (UID: \"63303f85-7fd4-4f0c-857a-7b787d3aca1e\") " pod="tigera-operator/tigera-operator-797db67f8-nd8dm"
Apr 30 00:15:13.613134 kubelet[2910]: I0430 00:15:13.613141 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63303f85-7fd4-4f0c-857a-7b787d3aca1e-var-lib-calico\") pod \"tigera-operator-797db67f8-nd8dm\" (UID: \"63303f85-7fd4-4f0c-857a-7b787d3aca1e\") " pod="tigera-operator/tigera-operator-797db67f8-nd8dm"
Apr 30 00:15:13.801792 kubelet[2910]: E0430 00:15:13.801650 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:13.802123 containerd[1596]: time="2025-04-30T00:15:13.802070784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wvqtk,Uid:7cc2ef4a-174d-4d4b-94f7-41187101b6af,Namespace:kube-system,Attempt:0,}"
Apr 30 00:15:13.867575 containerd[1596]: time="2025-04-30T00:15:13.867513735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-nd8dm,Uid:63303f85-7fd4-4f0c-857a-7b787d3aca1e,Namespace:tigera-operator,Attempt:0,}"
Apr 30 00:15:14.611145 containerd[1596]: time="2025-04-30T00:15:14.611011378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:15:14.611145 containerd[1596]: time="2025-04-30T00:15:14.611092806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:15:14.611145 containerd[1596]: time="2025-04-30T00:15:14.611103698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:15:14.611384 containerd[1596]: time="2025-04-30T00:15:14.611229557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:15:14.621317 containerd[1596]: time="2025-04-30T00:15:14.621202719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:15:14.621317 containerd[1596]: time="2025-04-30T00:15:14.621262442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:15:14.621317 containerd[1596]: time="2025-04-30T00:15:14.621272423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:15:14.621570 containerd[1596]: time="2025-04-30T00:15:14.621511584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:15:14.660446 containerd[1596]: time="2025-04-30T00:15:14.660391788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wvqtk,Uid:7cc2ef4a-174d-4d4b-94f7-41187101b6af,Namespace:kube-system,Attempt:0,} returns sandbox id \"205a9d88100eb73cb688131245999356aacfdb881ed5159a2b9e9ce30712eeef\""
Apr 30 00:15:14.661134 kubelet[2910]: E0430 00:15:14.661112 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:14.664732 containerd[1596]: time="2025-04-30T00:15:14.664578694Z" level=info msg="CreateContainer within sandbox \"205a9d88100eb73cb688131245999356aacfdb881ed5159a2b9e9ce30712eeef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:15:14.679625 containerd[1596]: time="2025-04-30T00:15:14.679559720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-nd8dm,Uid:63303f85-7fd4-4f0c-857a-7b787d3aca1e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ce64d9cdf9a09d560e0a9cb0579178b627d16e208e5818e36fa6d16305fec143\""
Apr 30 00:15:14.682099 containerd[1596]: time="2025-04-30T00:15:14.681540085Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
Apr 30 00:15:14.703578 containerd[1596]: time="2025-04-30T00:15:14.703528541Z" level=info msg="CreateContainer within sandbox \"205a9d88100eb73cb688131245999356aacfdb881ed5159a2b9e9ce30712eeef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f9874e018e92e3702bb71194bca35a1b4ca060c8fa938d78cc0c17370c82b32\""
Apr 30 00:15:14.704325 containerd[1596]: time="2025-04-30T00:15:14.704257772Z" level=info msg="StartContainer for \"8f9874e018e92e3702bb71194bca35a1b4ca060c8fa938d78cc0c17370c82b32\""
Apr 30 00:15:14.735113 systemd[1]: run-containerd-runc-k8s.io-8f9874e018e92e3702bb71194bca35a1b4ca060c8fa938d78cc0c17370c82b32-runc.LGAhAf.mount: Deactivated successfully.
Apr 30 00:15:14.786586 containerd[1596]: time="2025-04-30T00:15:14.786510954Z" level=info msg="StartContainer for \"8f9874e018e92e3702bb71194bca35a1b4ca060c8fa938d78cc0c17370c82b32\" returns successfully"
Apr 30 00:15:15.113824 kubelet[2910]: E0430 00:15:15.113678 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:15:15.125681 kubelet[2910]: I0430 00:15:15.125336 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wvqtk" podStartSLOduration=2.125312439 podStartE2EDuration="2.125312439s" podCreationTimestamp="2025-04-30 00:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:15:15.125206622 +0000 UTC m=+16.180821142" watchObservedRunningTime="2025-04-30 00:15:15.125312439 +0000 UTC m=+16.180926959"
Apr 30 00:15:22.587475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2741883145.mount: Deactivated successfully.
Apr 30 00:15:25.219590 containerd[1596]: time="2025-04-30T00:15:25.219520939Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:15:25.220366 containerd[1596]: time="2025-04-30T00:15:25.220309606Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
Apr 30 00:15:25.223466 containerd[1596]: time="2025-04-30T00:15:25.223420954Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:15:25.225697 containerd[1596]: time="2025-04-30T00:15:25.225647420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:15:25.226284 containerd[1596]: time="2025-04-30T00:15:25.226244930Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 10.544662868s"
Apr 30 00:15:25.226284 containerd[1596]: time="2025-04-30T00:15:25.226274360Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
Apr 30 00:15:25.228588 containerd[1596]: time="2025-04-30T00:15:25.228557291Z" level=info msg="CreateContainer within sandbox \"ce64d9cdf9a09d560e0a9cb0579178b627d16e208e5818e36fa6d16305fec143\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 30 00:15:25.244329 containerd[1596]: time="2025-04-30T00:15:25.244278474Z" level=info msg="CreateContainer within sandbox \"ce64d9cdf9a09d560e0a9cb0579178b627d16e208e5818e36fa6d16305fec143\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"563a20bc1738131fd8c4ed1b7060e95ef4615cc270693a3e63f7b88164d6e4a8\""
Apr 30 00:15:25.246265 containerd[1596]: time="2025-04-30T00:15:25.244932799Z" level=info msg="StartContainer for \"563a20bc1738131fd8c4ed1b7060e95ef4615cc270693a3e63f7b88164d6e4a8\""
Apr 30 00:15:25.307537 containerd[1596]: time="2025-04-30T00:15:25.307468745Z" level=info msg="StartContainer for \"563a20bc1738131fd8c4ed1b7060e95ef4615cc270693a3e63f7b88164d6e4a8\" returns successfully"
Apr 30 00:15:26.142171 kubelet[2910]: I0430 00:15:26.142091 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-nd8dm" podStartSLOduration=2.595886567 podStartE2EDuration="13.142049884s" podCreationTimestamp="2025-04-30 00:15:13 +0000 UTC" firstStartedPulling="2025-04-30 00:15:14.680953097 +0000 UTC m=+15.736567617" lastFinishedPulling="2025-04-30 00:15:25.227116414 +0000 UTC m=+26.282730934" observedRunningTime="2025-04-30 00:15:26.14196866 +0000 UTC m=+27.197583180" watchObservedRunningTime="2025-04-30 00:15:26.142049884 +0000 UTC m=+27.197664404"
Apr 30 00:15:27.424212 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:47474.service - OpenSSH per-connection server daemon (10.0.0.1:47474).
Apr 30 00:15:27.481866 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 47474 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:15:27.483451 sshd-session[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:15:27.490976 systemd-logind[1582]: New session 10 of user core.
Apr 30 00:15:27.504226 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 00:15:27.675124 sshd[3289]: Connection closed by 10.0.0.1 port 47474
Apr 30 00:15:27.674378 sshd-session[3286]: pam_unix(sshd:session): session closed for user core
Apr 30 00:15:27.678526 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:47474.service: Deactivated successfully.
Apr 30 00:15:27.682708 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit.
Apr 30 00:15:27.683616 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 00:15:27.685037 systemd-logind[1582]: Removed session 10.
Apr 30 00:15:28.529929 kubelet[2910]: I0430 00:15:28.527457 2910 topology_manager.go:215] "Topology Admit Handler" podUID="23f80d69-8867-4e3d-8b9b-dc5b0859d3a8" podNamespace="calico-system" podName="calico-node-gbkxd"
Apr 30 00:15:28.706916 kubelet[2910]: I0430 00:15:28.706843 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-cni-net-dir\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707175 kubelet[2910]: I0430 00:15:28.706930 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-flexvol-driver-host\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707175 kubelet[2910]: I0430 00:15:28.707032 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-node-certs\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707175 kubelet[2910]: I0430 00:15:28.707088 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-var-lib-calico\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707175 kubelet[2910]: I0430 00:15:28.707106 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j998m\" (UniqueName: \"kubernetes.io/projected/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-kube-api-access-j998m\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707175 kubelet[2910]: I0430 00:15:28.707126 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-xtables-lock\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707301 kubelet[2910]: I0430 00:15:28.707144 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-lib-modules\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707301 kubelet[2910]: I0430 00:15:28.707160 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-policysync\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707301 kubelet[2910]: I0430 00:15:28.707174 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-tigera-ca-bundle\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707301 kubelet[2910]: I0430 00:15:28.707187 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-var-run-calico\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707301 kubelet[2910]: I0430 00:15:28.707201 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-cni-bin-dir\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.707421 kubelet[2910]: I0430 00:15:28.707215 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23f80d69-8867-4e3d-8b9b-dc5b0859d3a8-cni-log-dir\") pod \"calico-node-gbkxd\" (UID: \"23f80d69-8867-4e3d-8b9b-dc5b0859d3a8\") " pod="calico-system/calico-node-gbkxd"
Apr 30 00:15:28.790015 kubelet[2910]: I0430 00:15:28.788765 2910 topology_manager.go:215] "Topology Admit Handler" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" podNamespace="calico-system" podName="csi-node-driver-4t95w"
Apr 30 00:15:28.790385 kubelet[2910]: E0430 00:15:28.790354 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335"
Apr 30 00:15:28.808302 kubelet[2910]: I0430 00:15:28.808264 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60f4275e-2eec-4a29-a8cc-8e6f60dbe335-kubelet-dir\") pod \"csi-node-driver-4t95w\" (UID: \"60f4275e-2eec-4a29-a8cc-8e6f60dbe335\") " pod="calico-system/csi-node-driver-4t95w"
Apr 30 00:15:28.808556 kubelet[2910]: I0430 00:15:28.808434 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/60f4275e-2eec-4a29-a8cc-8e6f60dbe335-registration-dir\") pod \"csi-node-driver-4t95w\" (UID: \"60f4275e-2eec-4a29-a8cc-8e6f60dbe335\") " pod="calico-system/csi-node-driver-4t95w"
Apr 30 00:15:28.809842 kubelet[2910]: I0430 00:15:28.808601 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/60f4275e-2eec-4a29-a8cc-8e6f60dbe335-varrun\") pod \"csi-node-driver-4t95w\" (UID: \"60f4275e-2eec-4a29-a8cc-8e6f60dbe335\") " pod="calico-system/csi-node-driver-4t95w"
Apr 30 00:15:28.809842 kubelet[2910]: I0430 00:15:28.808629 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2gn\" (UniqueName: \"kubernetes.io/projected/60f4275e-2eec-4a29-a8cc-8e6f60dbe335-kube-api-access-6k2gn\") pod \"csi-node-driver-4t95w\" (UID: \"60f4275e-2eec-4a29-a8cc-8e6f60dbe335\") " pod="calico-system/csi-node-driver-4t95w"
Apr 30 00:15:28.809842 kubelet[2910]: I0430 00:15:28.808700 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/60f4275e-2eec-4a29-a8cc-8e6f60dbe335-socket-dir\") pod \"csi-node-driver-4t95w\" (UID: \"60f4275e-2eec-4a29-a8cc-8e6f60dbe335\") " pod="calico-system/csi-node-driver-4t95w"
Apr 30 00:15:28.823795 kubelet[2910]: E0430
00:15:28.823758 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.823795 kubelet[2910]: W0430 00:15:28.823783 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.824408 kubelet[2910]: E0430 00:15:28.824355 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.827297 kubelet[2910]: E0430 00:15:28.827266 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.827511 kubelet[2910]: W0430 00:15:28.827422 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.827511 kubelet[2910]: E0430 00:15:28.827456 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.845967 kubelet[2910]: E0430 00:15:28.845922 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:28.847285 containerd[1596]: time="2025-04-30T00:15:28.846865069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbkxd,Uid:23f80d69-8867-4e3d-8b9b-dc5b0859d3a8,Namespace:calico-system,Attempt:0,}" Apr 30 00:15:28.902938 containerd[1596]: time="2025-04-30T00:15:28.902122688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:15:28.902938 containerd[1596]: time="2025-04-30T00:15:28.902252841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:15:28.902938 containerd[1596]: time="2025-04-30T00:15:28.902277921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:15:28.902938 containerd[1596]: time="2025-04-30T00:15:28.902435369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:15:28.910807 kubelet[2910]: E0430 00:15:28.909376 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.910807 kubelet[2910]: W0430 00:15:28.909403 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.910807 kubelet[2910]: E0430 00:15:28.909427 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.912812 kubelet[2910]: E0430 00:15:28.912797 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.912922 kubelet[2910]: W0430 00:15:28.912899 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.912995 kubelet[2910]: E0430 00:15:28.912983 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.914613 kubelet[2910]: E0430 00:15:28.914564 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.914675 kubelet[2910]: W0430 00:15:28.914609 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.914675 kubelet[2910]: E0430 00:15:28.914657 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.915076 kubelet[2910]: E0430 00:15:28.915048 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.915076 kubelet[2910]: W0430 00:15:28.915067 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.915297 kubelet[2910]: E0430 00:15:28.915213 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.915356 kubelet[2910]: E0430 00:15:28.915332 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.915356 kubelet[2910]: W0430 00:15:28.915354 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.915570 kubelet[2910]: E0430 00:15:28.915472 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.915650 kubelet[2910]: E0430 00:15:28.915631 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.915686 kubelet[2910]: W0430 00:15:28.915648 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.915764 kubelet[2910]: E0430 00:15:28.915743 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.916171 kubelet[2910]: E0430 00:15:28.916029 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.916171 kubelet[2910]: W0430 00:15:28.916045 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.916247 kubelet[2910]: E0430 00:15:28.916073 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.916464 kubelet[2910]: E0430 00:15:28.916441 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.916464 kubelet[2910]: W0430 00:15:28.916461 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.916536 kubelet[2910]: E0430 00:15:28.916480 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.916811 kubelet[2910]: E0430 00:15:28.916783 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.916811 kubelet[2910]: W0430 00:15:28.916802 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.916930 kubelet[2910]: E0430 00:15:28.916905 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.917483 kubelet[2910]: E0430 00:15:28.917446 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.917483 kubelet[2910]: W0430 00:15:28.917469 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.917564 kubelet[2910]: E0430 00:15:28.917553 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.917937 kubelet[2910]: E0430 00:15:28.917861 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.918632 kubelet[2910]: W0430 00:15:28.918598 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.918958 kubelet[2910]: E0430 00:15:28.918819 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.919160 kubelet[2910]: E0430 00:15:28.919131 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.919494 kubelet[2910]: W0430 00:15:28.919243 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.919723 kubelet[2910]: E0430 00:15:28.919706 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.920322 kubelet[2910]: E0430 00:15:28.920306 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.920509 kubelet[2910]: W0430 00:15:28.920416 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.920615 kubelet[2910]: E0430 00:15:28.920601 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.921117 kubelet[2910]: E0430 00:15:28.921065 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.921250 kubelet[2910]: W0430 00:15:28.921091 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.922057 kubelet[2910]: E0430 00:15:28.921406 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.922057 kubelet[2910]: E0430 00:15:28.921625 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.922057 kubelet[2910]: W0430 00:15:28.921636 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.922057 kubelet[2910]: E0430 00:15:28.921719 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.922180 kubelet[2910]: E0430 00:15:28.922066 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.922180 kubelet[2910]: W0430 00:15:28.922124 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.922242 kubelet[2910]: E0430 00:15:28.922177 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.922517 kubelet[2910]: E0430 00:15:28.922489 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.922517 kubelet[2910]: W0430 00:15:28.922506 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.922569 kubelet[2910]: E0430 00:15:28.922542 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.922831 kubelet[2910]: E0430 00:15:28.922809 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.922831 kubelet[2910]: W0430 00:15:28.922824 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.922943 kubelet[2910]: E0430 00:15:28.922909 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.923165 kubelet[2910]: E0430 00:15:28.923138 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.923165 kubelet[2910]: W0430 00:15:28.923153 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.923243 kubelet[2910]: E0430 00:15:28.923185 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.923520 kubelet[2910]: E0430 00:15:28.923477 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.923520 kubelet[2910]: W0430 00:15:28.923494 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.923520 kubelet[2910]: E0430 00:15:28.923518 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.923916 kubelet[2910]: E0430 00:15:28.923877 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.924147 kubelet[2910]: W0430 00:15:28.923974 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.924147 kubelet[2910]: E0430 00:15:28.923995 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.924565 kubelet[2910]: E0430 00:15:28.924482 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.925211 kubelet[2910]: W0430 00:15:28.924802 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.925211 kubelet[2910]: E0430 00:15:28.924825 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.925837 kubelet[2910]: E0430 00:15:28.925397 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.925837 kubelet[2910]: W0430 00:15:28.925413 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.925837 kubelet[2910]: E0430 00:15:28.925426 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.926469 kubelet[2910]: E0430 00:15:28.926420 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.926469 kubelet[2910]: W0430 00:15:28.926449 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.926469 kubelet[2910]: E0430 00:15:28.926469 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.927671 kubelet[2910]: E0430 00:15:28.927450 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.927671 kubelet[2910]: W0430 00:15:28.927521 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.927671 kubelet[2910]: E0430 00:15:28.927566 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:15:28.938396 kubelet[2910]: E0430 00:15:28.936814 2910 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:15:28.938396 kubelet[2910]: W0430 00:15:28.936843 2910 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:15:28.938396 kubelet[2910]: E0430 00:15:28.936873 2910 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:15:28.974285 containerd[1596]: time="2025-04-30T00:15:28.974233365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbkxd,Uid:23f80d69-8867-4e3d-8b9b-dc5b0859d3a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\"" Apr 30 00:15:28.975651 kubelet[2910]: E0430 00:15:28.975327 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:28.980675 containerd[1596]: time="2025-04-30T00:15:28.980640448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:15:31.062549 kubelet[2910]: E0430 00:15:31.062458 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:15:31.239708 containerd[1596]: time="2025-04-30T00:15:31.239608654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:15:31.240538 containerd[1596]: time="2025-04-30T00:15:31.240490029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 00:15:31.242036 containerd[1596]: time="2025-04-30T00:15:31.242001894Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:15:31.245196 containerd[1596]: time="2025-04-30T00:15:31.245142559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:15:31.245879 containerd[1596]: time="2025-04-30T00:15:31.245831155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.265150217s" Apr 30 00:15:31.245946 containerd[1596]: time="2025-04-30T00:15:31.245873331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 00:15:31.248271 containerd[1596]: time="2025-04-30T00:15:31.248227932Z" level=info msg="CreateContainer within sandbox \"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:15:31.276579 containerd[1596]: time="2025-04-30T00:15:31.276520598Z" level=info msg="CreateContainer within sandbox 
\"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9ee4c8432320fcbc182b971378a7e3adc71892d955284e538adc3ff6afb80098\"" Apr 30 00:15:31.277206 containerd[1596]: time="2025-04-30T00:15:31.277165948Z" level=info msg="StartContainer for \"9ee4c8432320fcbc182b971378a7e3adc71892d955284e538adc3ff6afb80098\"" Apr 30 00:15:31.363139 containerd[1596]: time="2025-04-30T00:15:31.362964050Z" level=info msg="StartContainer for \"9ee4c8432320fcbc182b971378a7e3adc71892d955284e538adc3ff6afb80098\" returns successfully" Apr 30 00:15:31.406365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee4c8432320fcbc182b971378a7e3adc71892d955284e538adc3ff6afb80098-rootfs.mount: Deactivated successfully. Apr 30 00:15:31.443428 containerd[1596]: time="2025-04-30T00:15:31.443337768Z" level=info msg="shim disconnected" id=9ee4c8432320fcbc182b971378a7e3adc71892d955284e538adc3ff6afb80098 namespace=k8s.io Apr 30 00:15:31.443428 containerd[1596]: time="2025-04-30T00:15:31.443422379Z" level=warning msg="cleaning up after shim disconnected" id=9ee4c8432320fcbc182b971378a7e3adc71892d955284e538adc3ff6afb80098 namespace=k8s.io Apr 30 00:15:31.443708 containerd[1596]: time="2025-04-30T00:15:31.443447880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:15:32.146595 kubelet[2910]: E0430 00:15:32.146553 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:32.148387 containerd[1596]: time="2025-04-30T00:15:32.147558020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:15:32.688309 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:47476.service - OpenSSH per-connection server daemon (10.0.0.1:47476). 
Apr 30 00:15:32.752583 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 47476 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:15:32.755054 sshd-session[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:15:32.760331 systemd-logind[1582]: New session 11 of user core.
Apr 30 00:15:32.771468 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 00:15:32.925664 sshd[3485]: Connection closed by 10.0.0.1 port 47476
Apr 30 00:15:32.926234 sshd-session[3482]: pam_unix(sshd:session): session closed for user core
Apr 30 00:15:32.932242 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:47476.service: Deactivated successfully.
Apr 30 00:15:32.935676 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:15:32.936607 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:15:32.937810 systemd-logind[1582]: Removed session 11.
Apr 30 00:15:33.063171 kubelet[2910]: E0430 00:15:33.063078 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335"
Apr 30 00:15:35.063250 kubelet[2910]: E0430 00:15:35.063165 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335"
Apr 30 00:15:37.064261 kubelet[2910]: E0430 00:15:37.064193 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335"
Apr 30 00:15:37.936155 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:49510.service - OpenSSH per-connection server daemon (10.0.0.1:49510).
Apr 30 00:15:37.998012 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 49510 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:15:37.999472 sshd-session[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:15:38.011031 systemd-logind[1582]: New session 12 of user core.
Apr 30 00:15:38.016356 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:15:38.032477 containerd[1596]: time="2025-04-30T00:15:38.032400265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:15:38.039242 containerd[1596]: time="2025-04-30T00:15:38.039169391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683"
Apr 30 00:15:38.046436 containerd[1596]: time="2025-04-30T00:15:38.045582467Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:15:38.056079 containerd[1596]: time="2025-04-30T00:15:38.055958977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:15:38.057423 containerd[1596]: time="2025-04-30T00:15:38.057351570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.909748683s"
Apr 30 00:15:38.057970 containerd[1596]: time="2025-04-30T00:15:38.057927404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\""
Apr 30 00:15:38.060988 containerd[1596]: time="2025-04-30T00:15:38.060852322Z" level=info msg="CreateContainer within sandbox \"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 30 00:15:38.120457 containerd[1596]: time="2025-04-30T00:15:38.120394477Z" level=info msg="CreateContainer within sandbox \"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3e1ab71c04bc290a8261a4341e96136da63099fa12d38265c7ba14d1000e7417\""
Apr 30 00:15:38.121624 containerd[1596]: time="2025-04-30T00:15:38.121590937Z" level=info msg="StartContainer for \"3e1ab71c04bc290a8261a4341e96136da63099fa12d38265c7ba14d1000e7417\""
Apr 30 00:15:38.164024 sshd[3512]: Connection closed by 10.0.0.1 port 49510
Apr 30 00:15:38.166957 sshd-session[3505]: pam_unix(sshd:session): session closed for user core
Apr 30 00:15:38.175048 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:49510.service: Deactivated successfully.
Apr 30 00:15:38.182235 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:15:38.186386 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:15:38.191255 systemd-logind[1582]: Removed session 12.
Apr 30 00:15:38.279298 containerd[1596]: time="2025-04-30T00:15:38.279245439Z" level=info msg="StartContainer for \"3e1ab71c04bc290a8261a4341e96136da63099fa12d38265c7ba14d1000e7417\" returns successfully" Apr 30 00:15:39.062620 kubelet[2910]: E0430 00:15:39.062529 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:15:39.248154 kubelet[2910]: E0430 00:15:39.248097 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:40.249956 kubelet[2910]: E0430 00:15:40.249878 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:40.634401 containerd[1596]: time="2025-04-30T00:15:40.634231854Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:15:40.662500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e1ab71c04bc290a8261a4341e96136da63099fa12d38265c7ba14d1000e7417-rootfs.mount: Deactivated successfully. 
Apr 30 00:15:40.673374 kubelet[2910]: I0430 00:15:40.673344 2910 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:15:40.788771 containerd[1596]: time="2025-04-30T00:15:40.788430450Z" level=info msg="shim disconnected" id=3e1ab71c04bc290a8261a4341e96136da63099fa12d38265c7ba14d1000e7417 namespace=k8s.io Apr 30 00:15:40.788771 containerd[1596]: time="2025-04-30T00:15:40.788521134Z" level=warning msg="cleaning up after shim disconnected" id=3e1ab71c04bc290a8261a4341e96136da63099fa12d38265c7ba14d1000e7417 namespace=k8s.io Apr 30 00:15:40.788771 containerd[1596]: time="2025-04-30T00:15:40.788532685Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:15:40.865844 kubelet[2910]: I0430 00:15:40.865771 2910 topology_manager.go:215] "Topology Admit Handler" podUID="8d8fc62d-6f2c-4db1-b700-84ab75075a8b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:40.885543 kubelet[2910]: I0430 00:15:40.885393 2910 topology_manager.go:215] "Topology Admit Handler" podUID="ad18deb4-55a1-4a60-89a6-511214b20063" podNamespace="calico-apiserver" podName="calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:40.885543 kubelet[2910]: I0430 00:15:40.885497 2910 topology_manager.go:215] "Topology Admit Handler" podUID="3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-km5z4" Apr 30 00:15:40.885755 kubelet[2910]: I0430 00:15:40.885621 2910 topology_manager.go:215] "Topology Admit Handler" podUID="5ad90aba-5f5d-4618-814e-e0df441b2efc" podNamespace="calico-system" podName="calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:40.885755 kubelet[2910]: I0430 00:15:40.885733 2910 topology_manager.go:215] "Topology Admit Handler" podUID="c27777f5-4b50-4a4b-8544-5463d251461f" podNamespace="calico-apiserver" podName="calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:41.054370 kubelet[2910]: I0430 00:15:41.054270 2910 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5ht\" (UniqueName: \"kubernetes.io/projected/8d8fc62d-6f2c-4db1-b700-84ab75075a8b-kube-api-access-nj5ht\") pod \"coredns-7db6d8ff4d-2nh9s\" (UID: \"8d8fc62d-6f2c-4db1-b700-84ab75075a8b\") " pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:41.054370 kubelet[2910]: I0430 00:15:41.054344 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c27777f5-4b50-4a4b-8544-5463d251461f-calico-apiserver-certs\") pod \"calico-apiserver-676f79f8bf-5c5xv\" (UID: \"c27777f5-4b50-4a4b-8544-5463d251461f\") " pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:41.054637 kubelet[2910]: I0430 00:15:41.054397 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tgs\" (UniqueName: \"kubernetes.io/projected/c27777f5-4b50-4a4b-8544-5463d251461f-kube-api-access-f4tgs\") pod \"calico-apiserver-676f79f8bf-5c5xv\" (UID: \"c27777f5-4b50-4a4b-8544-5463d251461f\") " pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:41.054637 kubelet[2910]: I0430 00:15:41.054424 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad90aba-5f5d-4618-814e-e0df441b2efc-tigera-ca-bundle\") pod \"calico-kube-controllers-6fd77dd9ff-bqkxv\" (UID: \"5ad90aba-5f5d-4618-814e-e0df441b2efc\") " pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:41.054637 kubelet[2910]: I0430 00:15:41.054454 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed-config-volume\") pod \"coredns-7db6d8ff4d-km5z4\" (UID: \"3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed\") " 
pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:41.054637 kubelet[2910]: I0430 00:15:41.054477 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zc9\" (UniqueName: \"kubernetes.io/projected/5ad90aba-5f5d-4618-814e-e0df441b2efc-kube-api-access-62zc9\") pod \"calico-kube-controllers-6fd77dd9ff-bqkxv\" (UID: \"5ad90aba-5f5d-4618-814e-e0df441b2efc\") " pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:41.054637 kubelet[2910]: I0430 00:15:41.054531 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad18deb4-55a1-4a60-89a6-511214b20063-calico-apiserver-certs\") pod \"calico-apiserver-676f79f8bf-gn2h7\" (UID: \"ad18deb4-55a1-4a60-89a6-511214b20063\") " pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:41.054823 kubelet[2910]: I0430 00:15:41.054552 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8fc62d-6f2c-4db1-b700-84ab75075a8b-config-volume\") pod \"coredns-7db6d8ff4d-2nh9s\" (UID: \"8d8fc62d-6f2c-4db1-b700-84ab75075a8b\") " pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:41.054823 kubelet[2910]: I0430 00:15:41.054583 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpk9d\" (UniqueName: \"kubernetes.io/projected/ad18deb4-55a1-4a60-89a6-511214b20063-kube-api-access-rpk9d\") pod \"calico-apiserver-676f79f8bf-gn2h7\" (UID: \"ad18deb4-55a1-4a60-89a6-511214b20063\") " pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:41.054823 kubelet[2910]: I0430 00:15:41.054609 2910 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swqlt\" (UniqueName: 
\"kubernetes.io/projected/3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed-kube-api-access-swqlt\") pod \"coredns-7db6d8ff4d-km5z4\" (UID: \"3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed\") " pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:41.074196 containerd[1596]: time="2025-04-30T00:15:41.074146885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:0,}" Apr 30 00:15:41.259220 kubelet[2910]: E0430 00:15:41.259179 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:41.259945 containerd[1596]: time="2025-04-30T00:15:41.259909724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:15:41.491231 kubelet[2910]: E0430 00:15:41.491157 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:41.493259 containerd[1596]: time="2025-04-30T00:15:41.491822923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:0,}" Apr 30 00:15:41.495074 containerd[1596]: time="2025-04-30T00:15:41.495042780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:15:41.497345 kubelet[2910]: E0430 00:15:41.496964 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:41.497638 containerd[1596]: time="2025-04-30T00:15:41.497601268Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:0,}" Apr 30 00:15:41.498686 containerd[1596]: time="2025-04-30T00:15:41.498649389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:0,}" Apr 30 00:15:41.500240 containerd[1596]: time="2025-04-30T00:15:41.500199730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:15:41.679060 containerd[1596]: time="2025-04-30T00:15:41.678861375Z" level=error msg="Failed to destroy network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:41.679667 containerd[1596]: time="2025-04-30T00:15:41.679624388Z" level=error msg="encountered an error cleaning up failed sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:41.679720 containerd[1596]: time="2025-04-30T00:15:41.679696379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 30 00:15:41.680328 kubelet[2910]: E0430 00:15:41.680235 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:41.680492 kubelet[2910]: E0430 00:15:41.680362 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:41.680492 kubelet[2910]: E0430 00:15:41.680395 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:41.680565 kubelet[2910]: E0430 00:15:41.680498 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:15:41.682180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045-shm.mount: Deactivated successfully. Apr 30 00:15:42.267618 kubelet[2910]: I0430 00:15:42.267579 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045" Apr 30 00:15:42.270451 containerd[1596]: time="2025-04-30T00:15:42.270412757Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:42.270687 containerd[1596]: time="2025-04-30T00:15:42.270652914Z" level=info msg="Ensure that sandbox 1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045 in task-service has been cleanup successfully" Apr 30 00:15:42.273360 containerd[1596]: time="2025-04-30T00:15:42.270900774Z" level=info msg="TearDown network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" successfully" Apr 30 00:15:42.273360 containerd[1596]: time="2025-04-30T00:15:42.270924638Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" returns successfully" Apr 30 00:15:42.273360 containerd[1596]: time="2025-04-30T00:15:42.272564379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:1,}" Apr 30 00:15:42.273116 systemd[1]: run-netns-cni\x2d8fe9bcc8\x2d4cde\x2d74d5\x2de3ca\x2d20c409dd8853.mount: Deactivated successfully. 
Apr 30 00:15:43.178401 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:49514.service - OpenSSH per-connection server daemon (10.0.0.1:49514). Apr 30 00:15:43.229969 sshd[3629]: Accepted publickey for core from 10.0.0.1 port 49514 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:15:43.231547 sshd-session[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:15:43.237828 systemd-logind[1582]: New session 13 of user core. Apr 30 00:15:43.244509 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:15:43.276828 containerd[1596]: time="2025-04-30T00:15:43.276762860Z" level=error msg="Failed to destroy network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.277313 containerd[1596]: time="2025-04-30T00:15:43.277284761Z" level=error msg="encountered an error cleaning up failed sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.277394 containerd[1596]: time="2025-04-30T00:15:43.277349539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.277662 
kubelet[2910]: E0430 00:15:43.277617 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.277968 kubelet[2910]: E0430 00:15:43.277694 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:43.277968 kubelet[2910]: E0430 00:15:43.277722 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:43.277968 kubelet[2910]: E0430 00:15:43.277779 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" podUID="5ad90aba-5f5d-4618-814e-e0df441b2efc" Apr 30 00:15:43.291198 containerd[1596]: time="2025-04-30T00:15:43.291137699Z" level=error msg="Failed to destroy network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.292110 containerd[1596]: time="2025-04-30T00:15:43.292074717Z" level=error msg="encountered an error cleaning up failed sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.292207 containerd[1596]: time="2025-04-30T00:15:43.292146578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.292428 kubelet[2910]: E0430 00:15:43.292386 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.292510 kubelet[2910]: E0430 00:15:43.292458 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:43.292510 kubelet[2910]: E0430 00:15:43.292484 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:43.292578 kubelet[2910]: E0430 00:15:43.292537 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2nh9s" 
podUID="8d8fc62d-6f2c-4db1-b700-84ab75075a8b" Apr 30 00:15:43.381222 containerd[1596]: time="2025-04-30T00:15:43.381163279Z" level=error msg="Failed to destroy network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.381706 containerd[1596]: time="2025-04-30T00:15:43.381671796Z" level=error msg="encountered an error cleaning up failed sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.381760 containerd[1596]: time="2025-04-30T00:15:43.381735021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.382084 kubelet[2910]: E0430 00:15:43.382037 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.382145 kubelet[2910]: E0430 00:15:43.382104 2910 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:43.382145 kubelet[2910]: E0430 00:15:43.382126 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:43.382205 kubelet[2910]: E0430 00:15:43.382171 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" podUID="c27777f5-4b50-4a4b-8544-5463d251461f" Apr 30 00:15:43.385256 sshd[3680]: Connection closed by 10.0.0.1 port 49514 Apr 30 00:15:43.387493 sshd-session[3629]: pam_unix(sshd:session): session closed for user core Apr 30 00:15:43.391116 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:49514.service: 
Deactivated successfully. Apr 30 00:15:43.395094 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:15:43.395949 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:15:43.397139 systemd-logind[1582]: Removed session 13. Apr 30 00:15:43.401298 containerd[1596]: time="2025-04-30T00:15:43.401227648Z" level=error msg="Failed to destroy network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.401707 containerd[1596]: time="2025-04-30T00:15:43.401680173Z" level=error msg="encountered an error cleaning up failed sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.401759 containerd[1596]: time="2025-04-30T00:15:43.401743048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.402089 kubelet[2910]: E0430 00:15:43.402036 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.402150 kubelet[2910]: E0430 00:15:43.402118 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:43.402150 kubelet[2910]: E0430 00:15:43.402147 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:43.402271 kubelet[2910]: E0430 00:15:43.402199 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km5z4" podUID="3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed" Apr 30 00:15:43.562932 containerd[1596]: time="2025-04-30T00:15:43.562851424Z" level=error msg="Failed to destroy 
network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.563322 containerd[1596]: time="2025-04-30T00:15:43.563296464Z" level=error msg="encountered an error cleaning up failed sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.563382 containerd[1596]: time="2025-04-30T00:15:43.563367484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.563698 kubelet[2910]: E0430 00:15:43.563632 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.563698 kubelet[2910]: E0430 00:15:43.563704 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:43.563980 kubelet[2910]: E0430 00:15:43.563728 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:43.563980 kubelet[2910]: E0430 00:15:43.563776 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" podUID="ad18deb4-55a1-4a60-89a6-511214b20063" Apr 30 00:15:43.839045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e-shm.mount: Deactivated successfully. 
Apr 30 00:15:43.839818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd-shm.mount: Deactivated successfully. Apr 30 00:15:43.840084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca-shm.mount: Deactivated successfully. Apr 30 00:15:43.865184 containerd[1596]: time="2025-04-30T00:15:43.865082379Z" level=error msg="Failed to destroy network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.868187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb-shm.mount: Deactivated successfully. Apr 30 00:15:43.868336 containerd[1596]: time="2025-04-30T00:15:43.868217553Z" level=error msg="encountered an error cleaning up failed sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.868336 containerd[1596]: time="2025-04-30T00:15:43.868286969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.868642 kubelet[2910]: E0430 
00:15:43.868593 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:43.868828 kubelet[2910]: E0430 00:15:43.868678 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:43.868828 kubelet[2910]: E0430 00:15:43.868705 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:43.868828 kubelet[2910]: E0430 00:15:43.868771 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:15:44.269401 kubelet[2910]: I0430 00:15:44.269354 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb" Apr 30 00:15:44.270150 containerd[1596]: time="2025-04-30T00:15:44.270103154Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" Apr 30 00:15:44.270400 containerd[1596]: time="2025-04-30T00:15:44.270359012Z" level=info msg="Ensure that sandbox b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb in task-service has been cleanup successfully" Apr 30 00:15:44.270636 kubelet[2910]: I0430 00:15:44.270452 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca" Apr 30 00:15:44.270936 containerd[1596]: time="2025-04-30T00:15:44.270850871Z" level=info msg="TearDown network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" successfully" Apr 30 00:15:44.270936 containerd[1596]: time="2025-04-30T00:15:44.270871839Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" returns successfully" Apr 30 00:15:44.273030 systemd[1]: run-netns-cni\x2dab9a40c3\x2dde74\x2d16a4\x2d4aec\x2d0d79d0d088d9.mount: Deactivated successfully. 
Apr 30 00:15:44.273953 containerd[1596]: time="2025-04-30T00:15:44.273876138Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:15:44.274099 containerd[1596]: time="2025-04-30T00:15:44.273967225Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:44.274099 containerd[1596]: time="2025-04-30T00:15:44.274050758Z" level=info msg="TearDown network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" successfully" Apr 30 00:15:44.274099 containerd[1596]: time="2025-04-30T00:15:44.274062639Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" returns successfully" Apr 30 00:15:44.274170 containerd[1596]: time="2025-04-30T00:15:44.274123460Z" level=info msg="Ensure that sandbox 48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca in task-service has been cleanup successfully" Apr 30 00:15:44.274490 containerd[1596]: time="2025-04-30T00:15:44.274367175Z" level=info msg="TearDown network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" successfully" Apr 30 00:15:44.274490 containerd[1596]: time="2025-04-30T00:15:44.274408781Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" returns successfully" Apr 30 00:15:44.274661 kubelet[2910]: I0430 00:15:44.274638 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41" Apr 30 00:15:44.275614 containerd[1596]: time="2025-04-30T00:15:44.275242465Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:15:44.275614 containerd[1596]: time="2025-04-30T00:15:44.275312292Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:2,}" Apr 30 00:15:44.275614 containerd[1596]: time="2025-04-30T00:15:44.275401325Z" level=info msg="Ensure that sandbox 91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41 in task-service has been cleanup successfully" Apr 30 00:15:44.275764 containerd[1596]: time="2025-04-30T00:15:44.275739883Z" level=info msg="TearDown network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" successfully" Apr 30 00:15:44.275764 containerd[1596]: time="2025-04-30T00:15:44.275760431Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" returns successfully" Apr 30 00:15:44.275983 kubelet[2910]: E0430 00:15:44.275958 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:44.276193 systemd[1]: run-netns-cni\x2deaec407c\x2da8e5\x2dc730\x2ded92\x2ddd5ff3e0f455.mount: Deactivated successfully. 
Apr 30 00:15:44.276386 containerd[1596]: time="2025-04-30T00:15:44.276275412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:1,}" Apr 30 00:15:44.276955 containerd[1596]: time="2025-04-30T00:15:44.276404759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:1,}" Apr 30 00:15:44.277268 kubelet[2910]: I0430 00:15:44.277154 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12" Apr 30 00:15:44.278718 containerd[1596]: time="2025-04-30T00:15:44.278107600Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 00:15:44.278718 containerd[1596]: time="2025-04-30T00:15:44.278386630Z" level=info msg="Ensure that sandbox ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12 in task-service has been cleanup successfully" Apr 30 00:15:44.278718 containerd[1596]: time="2025-04-30T00:15:44.278641595Z" level=info msg="TearDown network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" successfully" Apr 30 00:15:44.278718 containerd[1596]: time="2025-04-30T00:15:44.278667804Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" returns successfully" Apr 30 00:15:44.278878 kubelet[2910]: I0430 00:15:44.278851 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e" Apr 30 00:15:44.280701 kubelet[2910]: I0430 00:15:44.279921 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd" Apr 30 00:15:44.279913 systemd[1]: 
run-netns-cni\x2d06043f46\x2dde0f\x2d908f\x2dc126\x2d885542eefa99.mount: Deactivated successfully. Apr 30 00:15:44.280787 containerd[1596]: time="2025-04-30T00:15:44.279137622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:15:44.280787 containerd[1596]: time="2025-04-30T00:15:44.279496057Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" Apr 30 00:15:44.280787 containerd[1596]: time="2025-04-30T00:15:44.279738300Z" level=info msg="Ensure that sandbox 1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e in task-service has been cleanup successfully" Apr 30 00:15:44.280787 containerd[1596]: time="2025-04-30T00:15:44.279946230Z" level=info msg="TearDown network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" successfully" Apr 30 00:15:44.280787 containerd[1596]: time="2025-04-30T00:15:44.279959644Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" returns successfully" Apr 30 00:15:44.280787 containerd[1596]: time="2025-04-30T00:15:44.280717649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:15:44.280960 containerd[1596]: time="2025-04-30T00:15:44.280919719Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:15:44.281106 containerd[1596]: time="2025-04-30T00:15:44.281086373Z" level=info msg="Ensure that sandbox 7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd in task-service has been cleanup successfully" Apr 30 00:15:44.281243 containerd[1596]: time="2025-04-30T00:15:44.281225006Z" level=info msg="TearDown network for sandbox 
\"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" successfully" Apr 30 00:15:44.281243 containerd[1596]: time="2025-04-30T00:15:44.281240905Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" returns successfully" Apr 30 00:15:44.281445 kubelet[2910]: E0430 00:15:44.281421 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:44.281701 containerd[1596]: time="2025-04-30T00:15:44.281668256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:1,}" Apr 30 00:15:44.834967 systemd[1]: run-netns-cni\x2d5f482a94\x2d9bc4\x2d0a70\x2dbb78\x2d4940dbf550dc.mount: Deactivated successfully. Apr 30 00:15:44.835154 systemd[1]: run-netns-cni\x2d1c562757\x2dee28\x2d039b\x2d036b\x2d3421f5862827.mount: Deactivated successfully. Apr 30 00:15:44.835305 systemd[1]: run-netns-cni\x2d358272eb\x2d303e\x2d6d3f\x2d14bd\x2d73254ab2a8b3.mount: Deactivated successfully. 
Apr 30 00:15:46.649159 containerd[1596]: time="2025-04-30T00:15:46.648955998Z" level=error msg="Failed to destroy network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.651979 containerd[1596]: time="2025-04-30T00:15:46.651936900Z" level=error msg="encountered an error cleaning up failed sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.653195 containerd[1596]: time="2025-04-30T00:15:46.653161378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.654098 kubelet[2910]: E0430 00:15:46.654009 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.654514 kubelet[2910]: E0430 00:15:46.654105 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:46.655639 kubelet[2910]: E0430 00:15:46.655498 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:46.662777 kubelet[2910]: E0430 00:15:46.662021 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" podUID="5ad90aba-5f5d-4618-814e-e0df441b2efc" Apr 30 00:15:46.708919 containerd[1596]: time="2025-04-30T00:15:46.707394789Z" level=error msg="Failed to destroy network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.708919 containerd[1596]: time="2025-04-30T00:15:46.708239850Z" level=error msg="encountered an error cleaning up failed sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.708919 containerd[1596]: time="2025-04-30T00:15:46.708404152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.709192 kubelet[2910]: E0430 00:15:46.708778 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.709192 kubelet[2910]: E0430 00:15:46.708849 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:46.709192 kubelet[2910]: E0430 00:15:46.708873 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:46.709295 kubelet[2910]: E0430 00:15:46.709063 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" podUID="c27777f5-4b50-4a4b-8544-5463d251461f" Apr 30 00:15:46.728419 containerd[1596]: time="2025-04-30T00:15:46.728294259Z" level=error msg="Failed to destroy network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.729140 containerd[1596]: time="2025-04-30T00:15:46.729067860Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.729303 containerd[1596]: time="2025-04-30T00:15:46.729156422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.729579 kubelet[2910]: E0430 00:15:46.729515 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.729672 kubelet[2910]: E0430 00:15:46.729605 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:46.729672 kubelet[2910]: E0430 00:15:46.729633 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:46.729814 kubelet[2910]: E0430 00:15:46.729691 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2nh9s" podUID="8d8fc62d-6f2c-4db1-b700-84ab75075a8b" Apr 30 00:15:46.731026 containerd[1596]: time="2025-04-30T00:15:46.730982174Z" level=error msg="Failed to destroy network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.732491 containerd[1596]: time="2025-04-30T00:15:46.731400673Z" level=error msg="encountered an error cleaning up failed sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.732491 containerd[1596]: time="2025-04-30T00:15:46.731459851Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.732606 kubelet[2910]: E0430 00:15:46.731953 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.732606 kubelet[2910]: E0430 00:15:46.732052 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:46.732606 kubelet[2910]: E0430 00:15:46.732085 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:46.732705 kubelet[2910]: E0430 00:15:46.732144 2910 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km5z4" podUID="3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed" Apr 30 00:15:46.733631 containerd[1596]: time="2025-04-30T00:15:46.733570145Z" level=error msg="Failed to destroy network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.734189 containerd[1596]: time="2025-04-30T00:15:46.734159999Z" level=error msg="encountered an error cleaning up failed sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.734271 containerd[1596]: time="2025-04-30T00:15:46.734236950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.734653 kubelet[2910]: E0430 00:15:46.734608 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.734653 kubelet[2910]: E0430 00:15:46.734652 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:46.734772 kubelet[2910]: E0430 00:15:46.734671 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:46.734772 kubelet[2910]: E0430 00:15:46.734709 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:15:46.741432 containerd[1596]: time="2025-04-30T00:15:46.741353625Z" level=error msg="Failed to destroy network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.741918 containerd[1596]: time="2025-04-30T00:15:46.741865395Z" level=error msg="encountered an error cleaning up failed sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.741980 containerd[1596]: time="2025-04-30T00:15:46.741955140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.742313 kubelet[2910]: E0430 00:15:46.742252 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:46.742461 kubelet[2910]: E0430 00:15:46.742336 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:46.742461 kubelet[2910]: E0430 00:15:46.742362 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:46.742461 kubelet[2910]: E0430 00:15:46.742420 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" podUID="ad18deb4-55a1-4a60-89a6-511214b20063" Apr 30 00:15:47.288240 kubelet[2910]: I0430 00:15:47.288191 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282" Apr 30 00:15:47.289043 containerd[1596]: time="2025-04-30T00:15:47.289004020Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" Apr 30 00:15:47.289567 containerd[1596]: time="2025-04-30T00:15:47.289418071Z" level=info msg="Ensure that sandbox b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282 in task-service has been cleanup successfully" Apr 30 00:15:47.289726 containerd[1596]: time="2025-04-30T00:15:47.289665617Z" level=info msg="TearDown network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" successfully" Apr 30 00:15:47.289726 containerd[1596]: time="2025-04-30T00:15:47.289682868Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" returns successfully" Apr 30 00:15:47.290185 containerd[1596]: time="2025-04-30T00:15:47.290112248Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:15:47.290274 containerd[1596]: time="2025-04-30T00:15:47.290233772Z" level=info msg="TearDown network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" successfully" Apr 30 00:15:47.290274 containerd[1596]: time="2025-04-30T00:15:47.290251825Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" returns successfully" Apr 30 00:15:47.290414 kubelet[2910]: I0430 00:15:47.290383 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0" Apr 30 00:15:47.291036 containerd[1596]: 
time="2025-04-30T00:15:47.291006864Z" level=info msg="StopPodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\"" Apr 30 00:15:47.291132 kubelet[2910]: E0430 00:15:47.291052 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:47.291311 containerd[1596]: time="2025-04-30T00:15:47.291205761Z" level=info msg="Ensure that sandbox 55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0 in task-service has been cleanup successfully" Apr 30 00:15:47.291521 containerd[1596]: time="2025-04-30T00:15:47.291385061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:2,}" Apr 30 00:15:47.291683 containerd[1596]: time="2025-04-30T00:15:47.291639780Z" level=info msg="TearDown network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" successfully" Apr 30 00:15:47.291683 containerd[1596]: time="2025-04-30T00:15:47.291658193Z" level=info msg="StopPodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" returns successfully" Apr 30 00:15:47.292227 containerd[1596]: time="2025-04-30T00:15:47.292151440Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" Apr 30 00:15:47.292329 containerd[1596]: time="2025-04-30T00:15:47.292304533Z" level=info msg="TearDown network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" successfully" Apr 30 00:15:47.292329 containerd[1596]: time="2025-04-30T00:15:47.292322325Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" returns successfully" Apr 30 00:15:47.292755 containerd[1596]: time="2025-04-30T00:15:47.292724866Z" level=info msg="StopPodSandbox for 
\"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:47.292874 containerd[1596]: time="2025-04-30T00:15:47.292835760Z" level=info msg="TearDown network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" successfully" Apr 30 00:15:47.292874 containerd[1596]: time="2025-04-30T00:15:47.292860746Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" returns successfully" Apr 30 00:15:47.292874 containerd[1596]: time="2025-04-30T00:15:47.293339156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:3,}" Apr 30 00:15:47.293645 containerd[1596]: time="2025-04-30T00:15:47.293598404Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" Apr 30 00:15:47.293689 kubelet[2910]: I0430 00:15:47.293061 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38" Apr 30 00:15:47.293869 containerd[1596]: time="2025-04-30T00:15:47.293832404Z" level=info msg="Ensure that sandbox 69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38 in task-service has been cleanup successfully" Apr 30 00:15:47.294135 containerd[1596]: time="2025-04-30T00:15:47.294096670Z" level=info msg="TearDown network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" successfully" Apr 30 00:15:47.294135 containerd[1596]: time="2025-04-30T00:15:47.294123510Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" returns successfully" Apr 30 00:15:47.294478 containerd[1596]: time="2025-04-30T00:15:47.294448158Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:15:47.294556 containerd[1596]: 
time="2025-04-30T00:15:47.294541419Z" level=info msg="TearDown network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" successfully" Apr 30 00:15:47.294579 containerd[1596]: time="2025-04-30T00:15:47.294556276Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" returns successfully" Apr 30 00:15:47.294807 kubelet[2910]: I0430 00:15:47.294781 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2" Apr 30 00:15:47.295023 containerd[1596]: time="2025-04-30T00:15:47.294990144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:2,}" Apr 30 00:15:47.295578 containerd[1596]: time="2025-04-30T00:15:47.295248400Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" Apr 30 00:15:47.295578 containerd[1596]: time="2025-04-30T00:15:47.295423943Z" level=info msg="Ensure that sandbox 53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2 in task-service has been cleanup successfully" Apr 30 00:15:47.295698 containerd[1596]: time="2025-04-30T00:15:47.295674454Z" level=info msg="TearDown network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" successfully" Apr 30 00:15:47.295771 containerd[1596]: time="2025-04-30T00:15:47.295750023Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" returns successfully" Apr 30 00:15:47.296232 containerd[1596]: time="2025-04-30T00:15:47.295975558Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:15:47.296232 containerd[1596]: time="2025-04-30T00:15:47.296074951Z" level=info msg="TearDown network for sandbox 
\"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" successfully" Apr 30 00:15:47.296232 containerd[1596]: time="2025-04-30T00:15:47.296095809Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" returns successfully" Apr 30 00:15:47.296339 kubelet[2910]: I0430 00:15:47.296180 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f" Apr 30 00:15:47.296339 kubelet[2910]: E0430 00:15:47.296331 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:47.296571 containerd[1596]: time="2025-04-30T00:15:47.296540487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:2,}" Apr 30 00:15:47.296747 containerd[1596]: time="2025-04-30T00:15:47.296722281Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" Apr 30 00:15:47.296942 containerd[1596]: time="2025-04-30T00:15:47.296868050Z" level=info msg="Ensure that sandbox 72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f in task-service has been cleanup successfully" Apr 30 00:15:47.297167 containerd[1596]: time="2025-04-30T00:15:47.297094436Z" level=info msg="TearDown network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" successfully" Apr 30 00:15:47.297167 containerd[1596]: time="2025-04-30T00:15:47.297107421Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" returns successfully" Apr 30 00:15:47.297507 containerd[1596]: time="2025-04-30T00:15:47.297474356Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 
00:15:47.297573 containerd[1596]: time="2025-04-30T00:15:47.297556247Z" level=info msg="TearDown network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" successfully" Apr 30 00:15:47.297573 containerd[1596]: time="2025-04-30T00:15:47.297567326Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" returns successfully" Apr 30 00:15:47.297721 kubelet[2910]: I0430 00:15:47.297671 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee" Apr 30 00:15:47.297921 containerd[1596]: time="2025-04-30T00:15:47.297877417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:2,}" Apr 30 00:15:47.298474 containerd[1596]: time="2025-04-30T00:15:47.298164055Z" level=info msg="StopPodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\"" Apr 30 00:15:47.298474 containerd[1596]: time="2025-04-30T00:15:47.298348705Z" level=info msg="Ensure that sandbox 0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee in task-service has been cleanup successfully" Apr 30 00:15:47.298611 containerd[1596]: time="2025-04-30T00:15:47.298589418Z" level=info msg="TearDown network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" successfully" Apr 30 00:15:47.298611 containerd[1596]: time="2025-04-30T00:15:47.298610095Z" level=info msg="StopPodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" returns successfully" Apr 30 00:15:47.298934 containerd[1596]: time="2025-04-30T00:15:47.298861949Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" Apr 30 00:15:47.299006 containerd[1596]: time="2025-04-30T00:15:47.298990837Z" level=info msg="TearDown network for 
sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" successfully" Apr 30 00:15:47.299035 containerd[1596]: time="2025-04-30T00:15:47.299007497Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" returns successfully" Apr 30 00:15:47.299465 containerd[1596]: time="2025-04-30T00:15:47.299425025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:2,}" Apr 30 00:15:47.521929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2-shm.mount: Deactivated successfully. Apr 30 00:15:47.522148 systemd[1]: run-netns-cni\x2d56ab6594\x2d4f1c\x2d813e\x2d64a6\x2d13870be7823e.mount: Deactivated successfully. Apr 30 00:15:47.522297 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0-shm.mount: Deactivated successfully. Apr 30 00:15:47.522454 systemd[1]: run-netns-cni\x2dfe617e3d\x2d26c5\x2d6060\x2d5133\x2d0bcea6da35b4.mount: Deactivated successfully. Apr 30 00:15:47.522592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38-shm.mount: Deactivated successfully. Apr 30 00:15:48.398312 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:35228.service - OpenSSH per-connection server daemon (10.0.0.1:35228). Apr 30 00:15:48.471074 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 35228 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:15:48.472864 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:15:48.477309 systemd-logind[1582]: New session 14 of user core. Apr 30 00:15:48.490246 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 30 00:15:48.645247 sshd[4094]: Connection closed by 10.0.0.1 port 35228 Apr 30 00:15:48.645651 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Apr 30 00:15:48.649532 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:35228.service: Deactivated successfully. Apr 30 00:15:48.652282 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:15:48.652289 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:15:48.653618 systemd-logind[1582]: Removed session 14. Apr 30 00:15:49.151628 containerd[1596]: time="2025-04-30T00:15:49.151574257Z" level=error msg="Failed to destroy network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.152528 containerd[1596]: time="2025-04-30T00:15:49.152473649Z" level=error msg="encountered an error cleaning up failed sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.152528 containerd[1596]: time="2025-04-30T00:15:49.152537767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.153020 kubelet[2910]: E0430 00:15:49.152959 2910 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.153426 kubelet[2910]: E0430 00:15:49.153053 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:49.153426 kubelet[2910]: E0430 00:15:49.153081 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:15:49.153426 kubelet[2910]: E0430 00:15:49.153131 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:15:49.188915 containerd[1596]: time="2025-04-30T00:15:49.188825410Z" level=error msg="Failed to destroy network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.189271 containerd[1596]: time="2025-04-30T00:15:49.189245878Z" level=error msg="encountered an error cleaning up failed sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.189324 containerd[1596]: time="2025-04-30T00:15:49.189303123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.189768 kubelet[2910]: E0430 00:15:49.189559 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 
30 00:15:49.189768 kubelet[2910]: E0430 00:15:49.189636 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:49.189768 kubelet[2910]: E0430 00:15:49.189667 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:15:49.189909 kubelet[2910]: E0430 00:15:49.189719 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" podUID="5ad90aba-5f5d-4618-814e-e0df441b2efc" Apr 30 00:15:49.302940 kubelet[2910]: I0430 00:15:49.302902 2910 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476" Apr 30 00:15:49.303567 containerd[1596]: time="2025-04-30T00:15:49.303531288Z" level=info msg="StopPodSandbox for \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\"" Apr 30 00:15:49.303780 containerd[1596]: time="2025-04-30T00:15:49.303726358Z" level=info msg="Ensure that sandbox d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476 in task-service has been cleanup successfully" Apr 30 00:15:49.304060 containerd[1596]: time="2025-04-30T00:15:49.304025281Z" level=info msg="TearDown network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\" successfully" Apr 30 00:15:49.304060 containerd[1596]: time="2025-04-30T00:15:49.304043524Z" level=info msg="StopPodSandbox for \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\" returns successfully" Apr 30 00:15:49.304566 containerd[1596]: time="2025-04-30T00:15:49.304520686Z" level=info msg="StopPodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\"" Apr 30 00:15:49.304766 containerd[1596]: time="2025-04-30T00:15:49.304707742Z" level=info msg="TearDown network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" successfully" Apr 30 00:15:49.304766 containerd[1596]: time="2025-04-30T00:15:49.304737998Z" level=info msg="StopPodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" returns successfully" Apr 30 00:15:49.305210 containerd[1596]: time="2025-04-30T00:15:49.305094006Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" Apr 30 00:15:49.305210 containerd[1596]: time="2025-04-30T00:15:49.305185004Z" level=info msg="TearDown network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" successfully" Apr 30 00:15:49.305210 containerd[1596]: time="2025-04-30T00:15:49.305195073Z" level=info 
msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" returns successfully" Apr 30 00:15:49.305442 kubelet[2910]: I0430 00:15:49.305154 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac" Apr 30 00:15:49.305474 containerd[1596]: time="2025-04-30T00:15:49.305429135Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:49.305538 containerd[1596]: time="2025-04-30T00:15:49.305525333Z" level=info msg="TearDown network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" successfully" Apr 30 00:15:49.305605 containerd[1596]: time="2025-04-30T00:15:49.305536203Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" returns successfully" Apr 30 00:15:49.305921 containerd[1596]: time="2025-04-30T00:15:49.305877404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:4,}" Apr 30 00:15:49.305956 containerd[1596]: time="2025-04-30T00:15:49.305902751Z" level=info msg="StopPodSandbox for \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\"" Apr 30 00:15:49.306118 containerd[1596]: time="2025-04-30T00:15:49.306101738Z" level=info msg="Ensure that sandbox 4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac in task-service has been cleanup successfully" Apr 30 00:15:49.306276 containerd[1596]: time="2025-04-30T00:15:49.306251765Z" level=info msg="TearDown network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\" successfully" Apr 30 00:15:49.306276 containerd[1596]: time="2025-04-30T00:15:49.306270521Z" level=info msg="StopPodSandbox for \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\" returns successfully" Apr 30 
00:15:49.306569 containerd[1596]: time="2025-04-30T00:15:49.306526965Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" Apr 30 00:15:49.306673 containerd[1596]: time="2025-04-30T00:15:49.306652556Z" level=info msg="TearDown network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" successfully" Apr 30 00:15:49.306698 containerd[1596]: time="2025-04-30T00:15:49.306672343Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" returns successfully" Apr 30 00:15:49.306993 containerd[1596]: time="2025-04-30T00:15:49.306955517Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:15:49.307104 containerd[1596]: time="2025-04-30T00:15:49.307078964Z" level=info msg="TearDown network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" successfully" Apr 30 00:15:49.307133 containerd[1596]: time="2025-04-30T00:15:49.307103851Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" returns successfully" Apr 30 00:15:49.307480 containerd[1596]: time="2025-04-30T00:15:49.307447826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:3,}" Apr 30 00:15:49.421104 containerd[1596]: time="2025-04-30T00:15:49.420923470Z" level=error msg="Failed to destroy network for sandbox \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.421423 containerd[1596]: time="2025-04-30T00:15:49.421382848Z" level=error msg="encountered an error cleaning up failed sandbox 
\"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.421487 containerd[1596]: time="2025-04-30T00:15:49.421456505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.421849 kubelet[2910]: E0430 00:15:49.421780 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.422068 kubelet[2910]: E0430 00:15:49.421879 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:49.422068 kubelet[2910]: E0430 00:15:49.421967 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:15:49.422068 kubelet[2910]: E0430 00:15:49.422031 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2nh9s" podUID="8d8fc62d-6f2c-4db1-b700-84ab75075a8b" Apr 30 00:15:49.509300 containerd[1596]: time="2025-04-30T00:15:49.509234638Z" level=error msg="Failed to destroy network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.509784 containerd[1596]: time="2025-04-30T00:15:49.509743218Z" level=error msg="encountered an error cleaning up failed sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.509927 containerd[1596]: time="2025-04-30T00:15:49.509818777Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.510162 kubelet[2910]: E0430 00:15:49.510103 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.510212 kubelet[2910]: E0430 00:15:49.510182 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:49.510212 kubelet[2910]: E0430 00:15:49.510205 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:15:49.510285 kubelet[2910]: E0430 00:15:49.510256 2910 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km5z4" podUID="3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed" Apr 30 00:15:49.589074 containerd[1596]: time="2025-04-30T00:15:49.589001935Z" level=error msg="Failed to destroy network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.589471 containerd[1596]: time="2025-04-30T00:15:49.589436398Z" level=error msg="encountered an error cleaning up failed sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.589517 containerd[1596]: time="2025-04-30T00:15:49.589503743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.589850 kubelet[2910]: E0430 00:15:49.589790 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:49.589977 kubelet[2910]: E0430 00:15:49.589877 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:49.589977 kubelet[2910]: E0430 00:15:49.589925 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:15:49.590044 kubelet[2910]: E0430 00:15:49.590006 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" podUID="ad18deb4-55a1-4a60-89a6-511214b20063" Apr 30 00:15:49.879210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3-shm.mount: Deactivated successfully. Apr 30 00:15:49.879404 systemd[1]: run-netns-cni\x2d58f4f10a\x2d16ba\x2dd924\x2ddd61\x2d7184b642a006.mount: Deactivated successfully. Apr 30 00:15:49.879549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac-shm.mount: Deactivated successfully. Apr 30 00:15:49.879695 systemd[1]: run-netns-cni\x2d59b847cf\x2d5fec\x2d2b9a\x2d8dc8\x2d578222daa421.mount: Deactivated successfully. Apr 30 00:15:49.879848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476-shm.mount: Deactivated successfully. 
Apr 30 00:15:50.308837 kubelet[2910]: I0430 00:15:50.308802 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036" Apr 30 00:15:50.309451 containerd[1596]: time="2025-04-30T00:15:50.309421341Z" level=info msg="StopPodSandbox for \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\"" Apr 30 00:15:50.309735 containerd[1596]: time="2025-04-30T00:15:50.309630738Z" level=info msg="Ensure that sandbox c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036 in task-service has been cleanup successfully" Apr 30 00:15:50.310113 containerd[1596]: time="2025-04-30T00:15:50.309990455Z" level=info msg="TearDown network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\" successfully" Apr 30 00:15:50.310203 containerd[1596]: time="2025-04-30T00:15:50.310078118Z" level=info msg="StopPodSandbox for \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\" returns successfully" Apr 30 00:15:50.310461 containerd[1596]: time="2025-04-30T00:15:50.310440759Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" Apr 30 00:15:50.310622 containerd[1596]: time="2025-04-30T00:15:50.310519485Z" level=info msg="TearDown network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" successfully" Apr 30 00:15:50.310622 containerd[1596]: time="2025-04-30T00:15:50.310531247Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" returns successfully" Apr 30 00:15:50.310870 containerd[1596]: time="2025-04-30T00:15:50.310850668Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:15:50.310964 containerd[1596]: time="2025-04-30T00:15:50.310948329Z" level=info msg="TearDown network for sandbox 
\"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" successfully" Apr 30 00:15:50.310964 containerd[1596]: time="2025-04-30T00:15:50.310961774Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" returns successfully" Apr 30 00:15:50.311162 kubelet[2910]: E0430 00:15:50.311143 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:50.311341 kubelet[2910]: I0430 00:15:50.311326 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3" Apr 30 00:15:50.312167 containerd[1596]: time="2025-04-30T00:15:50.311781552Z" level=info msg="StopPodSandbox for \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\"" Apr 30 00:15:50.312167 containerd[1596]: time="2025-04-30T00:15:50.311866129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:3,}" Apr 30 00:15:50.312167 containerd[1596]: time="2025-04-30T00:15:50.312047094Z" level=info msg="Ensure that sandbox 7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3 in task-service has been cleanup successfully" Apr 30 00:15:50.312341 containerd[1596]: time="2025-04-30T00:15:50.312280035Z" level=info msg="TearDown network for sandbox \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\" successfully" Apr 30 00:15:50.312341 containerd[1596]: time="2025-04-30T00:15:50.312295494Z" level=info msg="StopPodSandbox for \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\" returns successfully" Apr 30 00:15:50.312302 systemd[1]: run-netns-cni\x2d3c373a3d\x2dba0c\x2dbcd9\x2d309d\x2de93cd33c969c.mount: Deactivated successfully. 
Apr 30 00:15:50.312868 containerd[1596]: time="2025-04-30T00:15:50.312754103Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" Apr 30 00:15:50.312868 containerd[1596]: time="2025-04-30T00:15:50.312824804Z" level=info msg="TearDown network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" successfully" Apr 30 00:15:50.312868 containerd[1596]: time="2025-04-30T00:15:50.312834121Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" returns successfully" Apr 30 00:15:50.313534 containerd[1596]: time="2025-04-30T00:15:50.313221479Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:15:50.313534 containerd[1596]: time="2025-04-30T00:15:50.313300575Z" level=info msg="TearDown network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" successfully" Apr 30 00:15:50.313534 containerd[1596]: time="2025-04-30T00:15:50.313309231Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" returns successfully" Apr 30 00:15:50.313625 kubelet[2910]: E0430 00:15:50.313440 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:15:50.313880 containerd[1596]: time="2025-04-30T00:15:50.313759004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:3,}" Apr 30 00:15:50.314519 kubelet[2910]: I0430 00:15:50.314503 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34" Apr 30 00:15:50.315021 containerd[1596]: time="2025-04-30T00:15:50.314988231Z" level=info msg="StopPodSandbox for 
\"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\"" Apr 30 00:15:50.315165 containerd[1596]: time="2025-04-30T00:15:50.315144089Z" level=info msg="Ensure that sandbox 62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34 in task-service has been cleanup successfully" Apr 30 00:15:50.315236 systemd[1]: run-netns-cni\x2d479446a1\x2d2a58\x2da52b\x2dc885\x2d9cc7b3b1cb49.mount: Deactivated successfully. Apr 30 00:15:50.315337 containerd[1596]: time="2025-04-30T00:15:50.315316799Z" level=info msg="TearDown network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\" successfully" Apr 30 00:15:50.315364 containerd[1596]: time="2025-04-30T00:15:50.315336315Z" level=info msg="StopPodSandbox for \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\" returns successfully" Apr 30 00:15:50.315598 containerd[1596]: time="2025-04-30T00:15:50.315575268Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" Apr 30 00:15:50.315678 containerd[1596]: time="2025-04-30T00:15:50.315659965Z" level=info msg="TearDown network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" successfully" Apr 30 00:15:50.315678 containerd[1596]: time="2025-04-30T00:15:50.315675012Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" returns successfully" Apr 30 00:15:50.316393 containerd[1596]: time="2025-04-30T00:15:50.316001567Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 00:15:50.316393 containerd[1596]: time="2025-04-30T00:15:50.316117312Z" level=info msg="TearDown network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" successfully" Apr 30 00:15:50.316393 containerd[1596]: time="2025-04-30T00:15:50.316129403Z" level=info msg="StopPodSandbox for 
\"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" returns successfully" Apr 30 00:15:50.316627 containerd[1596]: time="2025-04-30T00:15:50.316587842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:3,}" Apr 30 00:15:50.317998 systemd[1]: run-netns-cni\x2dad27f55f\x2de06b\x2defb6\x2df784\x2d687f035c0d01.mount: Deactivated successfully. Apr 30 00:15:53.259489 containerd[1596]: time="2025-04-30T00:15:53.259429162Z" level=error msg="Failed to destroy network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:53.260047 containerd[1596]: time="2025-04-30T00:15:53.259859663Z" level=error msg="encountered an error cleaning up failed sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:53.260047 containerd[1596]: time="2025-04-30T00:15:53.259932749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:53.260173 kubelet[2910]: E0430 00:15:53.260141 2910 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:15:53.260747 kubelet[2910]: E0430 00:15:53.260194 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:53.260747 kubelet[2910]: E0430 00:15:53.260215 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:15:53.260747 kubelet[2910]: E0430 00:15:53.260265 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" podUID="c27777f5-4b50-4a4b-8544-5463d251461f" Apr 30 00:15:53.262347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57-shm.mount: Deactivated successfully. Apr 30 00:15:53.321936 kubelet[2910]: I0430 00:15:53.321901 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57" Apr 30 00:15:53.322452 containerd[1596]: time="2025-04-30T00:15:53.322417131Z" level=info msg="StopPodSandbox for \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\"" Apr 30 00:15:53.322658 containerd[1596]: time="2025-04-30T00:15:53.322629608Z" level=info msg="Ensure that sandbox 8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57 in task-service has been cleanup successfully" Apr 30 00:15:53.322878 containerd[1596]: time="2025-04-30T00:15:53.322853615Z" level=info msg="TearDown network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\" successfully" Apr 30 00:15:53.322878 containerd[1596]: time="2025-04-30T00:15:53.322875615Z" level=info msg="StopPodSandbox for \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\" returns successfully" Apr 30 00:15:53.324373 containerd[1596]: time="2025-04-30T00:15:53.324144199Z" level=info msg="StopPodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\"" Apr 30 00:15:53.324373 containerd[1596]: time="2025-04-30T00:15:53.324234267Z" level=info msg="TearDown network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" successfully" Apr 30 00:15:53.324373 containerd[1596]: time="2025-04-30T00:15:53.324248022Z" level=info msg="StopPodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" returns 
successfully" Apr 30 00:15:53.324818 containerd[1596]: time="2025-04-30T00:15:53.324640413Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" Apr 30 00:15:53.324818 containerd[1596]: time="2025-04-30T00:15:53.324742223Z" level=info msg="TearDown network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" successfully" Apr 30 00:15:53.324818 containerd[1596]: time="2025-04-30T00:15:53.324757932Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" returns successfully" Apr 30 00:15:53.325016 systemd[1]: run-netns-cni\x2d4357fcc0\x2d37bc\x2d3ae1\x2dbf30\x2d260f3f182be5.mount: Deactivated successfully. Apr 30 00:15:53.325286 containerd[1596]: time="2025-04-30T00:15:53.325263814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:3,}" Apr 30 00:15:53.657187 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:35232.service - OpenSSH per-connection server daemon (10.0.0.1:35232). Apr 30 00:15:53.738374 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 35232 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:15:53.740416 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:15:53.747996 systemd-logind[1582]: New session 15 of user core. Apr 30 00:15:53.752303 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:15:54.700035 systemd-resolved[1468]: Under memory pressure, flushing caches. Apr 30 00:15:54.702276 systemd-journald[1158]: Under memory pressure, flushing caches. Apr 30 00:15:54.700083 systemd-resolved[1468]: Flushed all caches. 
Apr 30 00:15:54.874090 sshd[4337]: Connection closed by 10.0.0.1 port 35232 Apr 30 00:15:54.874473 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Apr 30 00:15:54.879850 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:35232.service: Deactivated successfully. Apr 30 00:15:54.884260 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:15:54.885292 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:15:54.887099 systemd-logind[1582]: Removed session 15. Apr 30 00:15:57.567954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542291726.mount: Deactivated successfully. Apr 30 00:15:59.039650 containerd[1596]: time="2025-04-30T00:15:59.039592591Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:59.049025 containerd[1596]: time="2025-04-30T00:15:59.039738285Z" level=info msg="TearDown network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" successfully" Apr 30 00:15:59.049025 containerd[1596]: time="2025-04-30T00:15:59.049001550Z" level=info msg="StopPodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" returns successfully" Apr 30 00:15:59.049610 containerd[1596]: time="2025-04-30T00:15:59.049505538Z" level=info msg="RemovePodSandbox for \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:59.055534 containerd[1596]: time="2025-04-30T00:15:59.055481441Z" level=info msg="Forcibly stopping sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\"" Apr 30 00:15:59.055806 containerd[1596]: time="2025-04-30T00:15:59.055612647Z" level=info msg="TearDown network for sandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" successfully" Apr 30 00:15:59.887233 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:54826.service - OpenSSH per-connection server daemon (10.0.0.1:54826). 
Apr 30 00:15:59.969342 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 54826 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:15:59.971116 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:15:59.975860 systemd-logind[1582]: New session 16 of user core. Apr 30 00:15:59.986458 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:16:00.262489 sshd[4355]: Connection closed by 10.0.0.1 port 54826 Apr 30 00:16:00.262955 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:00.274210 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:54840.service - OpenSSH per-connection server daemon (10.0.0.1:54840). Apr 30 00:16:00.275109 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:54826.service: Deactivated successfully. Apr 30 00:16:00.278097 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:16:00.280620 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:16:00.281793 systemd-logind[1582]: Removed session 16. Apr 30 00:16:00.313967 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:16:00.316080 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:00.321131 systemd-logind[1582]: New session 17 of user core. Apr 30 00:16:00.329345 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:16:01.405013 sshd[4372]: Connection closed by 10.0.0.1 port 54840 Apr 30 00:16:01.405676 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:01.419239 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). Apr 30 00:16:01.419806 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:54840.service: Deactivated successfully. 
Apr 30 00:16:01.422939 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:16:01.424097 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:16:01.425681 systemd-logind[1582]: Removed session 17. Apr 30 00:16:01.461908 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:16:01.463841 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:01.468854 systemd-logind[1582]: New session 18 of user core. Apr 30 00:16:01.480194 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:16:02.052208 sshd[4385]: Connection closed by 10.0.0.1 port 54846 Apr 30 00:16:02.052593 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:02.057077 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:54846.service: Deactivated successfully. Apr 30 00:16:02.059656 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:16:02.059704 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:16:02.060945 systemd-logind[1582]: Removed session 18. Apr 30 00:16:02.687016 containerd[1596]: time="2025-04-30T00:16:02.686954439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:02.687712 containerd[1596]: time="2025-04-30T00:16:02.687039540Z" level=info msg="RemovePodSandbox \"1e46af9c6fff26e454299b39a37cb3ac8c2def7c2f22f0d19bf7c86d6d7d9045\" returns successfully" Apr 30 00:16:02.687848 containerd[1596]: time="2025-04-30T00:16:02.687790628Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" Apr 30 00:16:02.688002 containerd[1596]: time="2025-04-30T00:16:02.687969287Z" level=info msg="TearDown network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" successfully" Apr 30 00:16:02.688002 containerd[1596]: time="2025-04-30T00:16:02.687990526Z" level=info msg="StopPodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" returns successfully" Apr 30 00:16:02.688474 containerd[1596]: time="2025-04-30T00:16:02.688443171Z" level=info msg="RemovePodSandbox for \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" Apr 30 00:16:02.688559 containerd[1596]: time="2025-04-30T00:16:02.688486604Z" level=info msg="Forcibly stopping sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\"" Apr 30 00:16:02.688659 containerd[1596]: time="2025-04-30T00:16:02.688606119Z" level=info msg="TearDown network for sandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" successfully" Apr 30 00:16:02.790011 containerd[1596]: time="2025-04-30T00:16:02.789962430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:02.847075 containerd[1596]: time="2025-04-30T00:16:02.818195128Z" level=error msg="Failed to destroy network for sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
00:16:02.847559 containerd[1596]: time="2025-04-30T00:16:02.847495772Z" level=error msg="encountered an error cleaning up failed sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.847632 containerd[1596]: time="2025-04-30T00:16:02.847596351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.847956 kubelet[2910]: E0430 00:16:02.847880 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.848482 kubelet[2910]: E0430 00:16:02.847980 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:16:02.848482 kubelet[2910]: E0430 00:16:02.848004 2910 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t95w" Apr 30 00:16:02.848482 kubelet[2910]: E0430 00:16:02.848052 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4t95w_calico-system(60f4275e-2eec-4a29-a8cc-8e6f60dbe335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4t95w" podUID="60f4275e-2eec-4a29-a8cc-8e6f60dbe335" Apr 30 00:16:02.880821 containerd[1596]: time="2025-04-30T00:16:02.880761968Z" level=error msg="Failed to destroy network for sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.881411 containerd[1596]: time="2025-04-30T00:16:02.881372641Z" level=error msg="encountered an error cleaning up failed sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.881411 containerd[1596]: time="2025-04-30T00:16:02.881430330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.881804 kubelet[2910]: E0430 00:16:02.881747 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.881914 kubelet[2910]: E0430 00:16:02.881834 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:16:02.881914 kubelet[2910]: E0430 00:16:02.881865 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-km5z4" Apr 30 00:16:02.882041 kubelet[2910]: E0430 00:16:02.881982 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-km5z4_kube-system(3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km5z4" podUID="3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed" Apr 30 00:16:02.918614 containerd[1596]: time="2025-04-30T00:16:02.918440420Z" level=error msg="Failed to destroy network for sandbox \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.918981 containerd[1596]: time="2025-04-30T00:16:02.918944692Z" level=error msg="encountered an error cleaning up failed sandbox \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.919046 containerd[1596]: time="2025-04-30T00:16:02.919019935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.919342 kubelet[2910]: E0430 00:16:02.919279 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.919342 kubelet[2910]: E0430 00:16:02.919355 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:16:02.919553 kubelet[2910]: E0430 00:16:02.919378 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" Apr 30 00:16:02.919553 kubelet[2910]: E0430 00:16:02.919429 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd77dd9ff-bqkxv_calico-system(5ad90aba-5f5d-4618-814e-e0df441b2efc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" podUID="5ad90aba-5f5d-4618-814e-e0df441b2efc" Apr 30 00:16:02.956710 containerd[1596]: time="2025-04-30T00:16:02.955477890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 00:16:02.982068 containerd[1596]: time="2025-04-30T00:16:02.982013661Z" level=error msg="Failed to destroy network for sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.982498 containerd[1596]: time="2025-04-30T00:16:02.982454103Z" level=error msg="encountered an error cleaning up failed sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.982565 containerd[1596]: time="2025-04-30T00:16:02.982546788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.983243 kubelet[2910]: E0430 00:16:02.982789 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.983243 kubelet[2910]: E0430 00:16:02.982860 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:16:02.983243 kubelet[2910]: E0430 00:16:02.982903 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nh9s" Apr 30 00:16:02.983429 kubelet[2910]: E0430 00:16:02.982965 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2nh9s_kube-system(8d8fc62d-6f2c-4db1-b700-84ab75075a8b)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2nh9s" podUID="8d8fc62d-6f2c-4db1-b700-84ab75075a8b" Apr 30 00:16:02.992465 containerd[1596]: time="2025-04-30T00:16:02.992394893Z" level=error msg="Failed to destroy network for sandbox \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.992824 containerd[1596]: time="2025-04-30T00:16:02.992793676Z" level=error msg="encountered an error cleaning up failed sandbox \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.992961 containerd[1596]: time="2025-04-30T00:16:02.992850163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.993253 kubelet[2910]: E0430 00:16:02.993108 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:02.993253 kubelet[2910]: E0430 00:16:02.993188 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:16:02.993253 kubelet[2910]: E0430 00:16:02.993212 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" Apr 30 00:16:02.993594 kubelet[2910]: E0430 00:16:02.993502 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-5c5xv_calico-apiserver(c27777f5-4b50-4a4b-8544-5463d251461f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" podUID="c27777f5-4b50-4a4b-8544-5463d251461f" Apr 30 00:16:02.996137 containerd[1596]: time="2025-04-30T00:16:02.995980136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:16:02.996137 containerd[1596]: time="2025-04-30T00:16:02.996050639Z" level=info msg="RemovePodSandbox \"b054eb0dd66c9f129715aeda4f57f400871a1cadc5f1e61865c98f9f0381badb\" returns successfully" Apr 30 00:16:02.996580 containerd[1596]: time="2025-04-30T00:16:02.996523593Z" level=info msg="StopPodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\"" Apr 30 00:16:02.996629 containerd[1596]: time="2025-04-30T00:16:02.996617310Z" level=info msg="TearDown network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" successfully" Apr 30 00:16:02.996629 containerd[1596]: time="2025-04-30T00:16:02.996627339Z" level=info msg="StopPodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" returns successfully" Apr 30 00:16:02.997141 containerd[1596]: time="2025-04-30T00:16:02.997118627Z" level=info msg="RemovePodSandbox for \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\"" Apr 30 00:16:02.997141 containerd[1596]: time="2025-04-30T00:16:02.997139727Z" level=info msg="Forcibly stopping sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\"" Apr 30 00:16:02.997275 containerd[1596]: time="2025-04-30T00:16:02.997206543Z" level=info msg="TearDown network for sandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" successfully" Apr 30 00:16:03.036279 containerd[1596]: time="2025-04-30T00:16:03.036220417Z" level=info msg="ImageCreate event 
name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:03.047931 containerd[1596]: time="2025-04-30T00:16:03.047864574Z" level=error msg="Failed to destroy network for sandbox \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:03.048349 containerd[1596]: time="2025-04-30T00:16:03.048317200Z" level=error msg="encountered an error cleaning up failed sandbox \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:03.048398 containerd[1596]: time="2025-04-30T00:16:03.048371883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:16:03.048620 kubelet[2910]: E0430 00:16:03.048582 2910 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
00:16:03.048683 kubelet[2910]: E0430 00:16:03.048644 2910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:16:03.048683 kubelet[2910]: E0430 00:16:03.048666 2910 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" Apr 30 00:16:03.048734 kubelet[2910]: E0430 00:16:03.048711 2910 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676f79f8bf-gn2h7_calico-apiserver(ad18deb4-55a1-4a60-89a6-511214b20063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" podUID="ad18deb4-55a1-4a60-89a6-511214b20063" Apr 30 00:16:03.079718 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce-shm.mount: Deactivated successfully. Apr 30 00:16:03.079933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39-shm.mount: Deactivated successfully. Apr 30 00:16:03.080099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab-shm.mount: Deactivated successfully. Apr 30 00:16:03.093694 containerd[1596]: time="2025-04-30T00:16:03.093627754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:16:03.093789 containerd[1596]: time="2025-04-30T00:16:03.093704449Z" level=info msg="RemovePodSandbox \"55cc7139a7c9b230210ac1cae44fec94766e977b9532e2f628a107943b82a6f0\" returns successfully" Apr 30 00:16:03.094556 containerd[1596]: time="2025-04-30T00:16:03.094278545Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" Apr 30 00:16:03.094556 containerd[1596]: time="2025-04-30T00:16:03.094414072Z" level=info msg="TearDown network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" successfully" Apr 30 00:16:03.094556 containerd[1596]: time="2025-04-30T00:16:03.094463174Z" level=info msg="StopPodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" returns successfully" Apr 30 00:16:03.094838 containerd[1596]: time="2025-04-30T00:16:03.094804090Z" level=info msg="RemovePodSandbox for \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" Apr 30 00:16:03.094931 containerd[1596]: time="2025-04-30T00:16:03.094841451Z" level=info msg="Forcibly stopping sandbox 
\"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\"" Apr 30 00:16:03.095025 containerd[1596]: time="2025-04-30T00:16:03.094970515Z" level=info msg="TearDown network for sandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" successfully" Apr 30 00:16:03.133433 containerd[1596]: time="2025-04-30T00:16:03.133355497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:03.134440 containerd[1596]: time="2025-04-30T00:16:03.134376329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 21.874417485s" Apr 30 00:16:03.134440 containerd[1596]: time="2025-04-30T00:16:03.134429089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 00:16:03.146028 containerd[1596]: time="2025-04-30T00:16:03.145854642Z" level=info msg="CreateContainer within sandbox \"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:16:03.234577 containerd[1596]: time="2025-04-30T00:16:03.234485595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:03.234786 containerd[1596]: time="2025-04-30T00:16:03.234602256Z" level=info msg="RemovePodSandbox \"1d1e539de5bd98ed9cb48b1ac6d4de2f2d99d5b7355939f520f8546f6325489e\" returns successfully" Apr 30 00:16:03.235551 containerd[1596]: time="2025-04-30T00:16:03.235244912Z" level=info msg="StopPodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\"" Apr 30 00:16:03.235551 containerd[1596]: time="2025-04-30T00:16:03.235384296Z" level=info msg="TearDown network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" successfully" Apr 30 00:16:03.235551 containerd[1596]: time="2025-04-30T00:16:03.235402129Z" level=info msg="StopPodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" returns successfully" Apr 30 00:16:03.236169 containerd[1596]: time="2025-04-30T00:16:03.236137410Z" level=info msg="RemovePodSandbox for \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\"" Apr 30 00:16:03.236246 containerd[1596]: time="2025-04-30T00:16:03.236168098Z" level=info msg="Forcibly stopping sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\"" Apr 30 00:16:03.236369 containerd[1596]: time="2025-04-30T00:16:03.236295419Z" level=info msg="TearDown network for sandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" successfully" Apr 30 00:16:03.313139 containerd[1596]: time="2025-04-30T00:16:03.313051599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:03.313139 containerd[1596]: time="2025-04-30T00:16:03.313140106Z" level=info msg="RemovePodSandbox \"0e8330eb6a2e4dd8d60481808101ee03f6a0058ca436159bdd152306779a87ee\" returns successfully" Apr 30 00:16:03.313985 containerd[1596]: time="2025-04-30T00:16:03.313856131Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:16:03.314049 containerd[1596]: time="2025-04-30T00:16:03.314029779Z" level=info msg="TearDown network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" successfully" Apr 30 00:16:03.314049 containerd[1596]: time="2025-04-30T00:16:03.314045509Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" returns successfully" Apr 30 00:16:03.314682 containerd[1596]: time="2025-04-30T00:16:03.314646776Z" level=info msg="RemovePodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:16:03.314682 containerd[1596]: time="2025-04-30T00:16:03.314676132Z" level=info msg="Forcibly stopping sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:16:03.314978 containerd[1596]: time="2025-04-30T00:16:03.314770390Z" level=info msg="TearDown network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" successfully" Apr 30 00:16:03.370706 kubelet[2910]: I0430 00:16:03.370647 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce" Apr 30 00:16:03.371805 containerd[1596]: time="2025-04-30T00:16:03.371122802Z" level=info msg="StopPodSandbox for \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\"" Apr 30 00:16:03.371805 containerd[1596]: time="2025-04-30T00:16:03.371445152Z" level=info msg="Ensure that sandbox afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce in task-service has been cleanup 
successfully" Apr 30 00:16:03.371805 containerd[1596]: time="2025-04-30T00:16:03.371800284Z" level=info msg="TearDown network for sandbox \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\" successfully" Apr 30 00:16:03.372052 containerd[1596]: time="2025-04-30T00:16:03.371814350Z" level=info msg="StopPodSandbox for \"afad30c319d95bb7d5e1dc6877feedaee0c675f137eda4c069812d6ae4f436ce\" returns successfully" Apr 30 00:16:03.372398 containerd[1596]: time="2025-04-30T00:16:03.372342158Z" level=info msg="StopPodSandbox for \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\"" Apr 30 00:16:03.372698 containerd[1596]: time="2025-04-30T00:16:03.372509676Z" level=info msg="TearDown network for sandbox \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\" successfully" Apr 30 00:16:03.372698 containerd[1596]: time="2025-04-30T00:16:03.372546856Z" level=info msg="StopPodSandbox for \"4fde7a899a2d4655dd5fe4033b6541e1f700c8817d4ff02d4d7b468f24aa7aac\" returns successfully" Apr 30 00:16:03.373092 containerd[1596]: time="2025-04-30T00:16:03.373041773Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" Apr 30 00:16:03.373255 containerd[1596]: time="2025-04-30T00:16:03.373156690Z" level=info msg="TearDown network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" successfully" Apr 30 00:16:03.373255 containerd[1596]: time="2025-04-30T00:16:03.373170215Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" returns successfully" Apr 30 00:16:03.373584 containerd[1596]: time="2025-04-30T00:16:03.373546056Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:16:03.373664 containerd[1596]: time="2025-04-30T00:16:03.373645093Z" level=info msg="TearDown network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" 
successfully" Apr 30 00:16:03.373664 containerd[1596]: time="2025-04-30T00:16:03.373660964Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" returns successfully" Apr 30 00:16:03.374111 kubelet[2910]: I0430 00:16:03.374086 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39" Apr 30 00:16:03.374383 containerd[1596]: time="2025-04-30T00:16:03.374255238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:4,}" Apr 30 00:16:03.374813 containerd[1596]: time="2025-04-30T00:16:03.374770472Z" level=info msg="StopPodSandbox for \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\"" Apr 30 00:16:03.375051 containerd[1596]: time="2025-04-30T00:16:03.375028911Z" level=info msg="Ensure that sandbox 0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39 in task-service has been cleanup successfully" Apr 30 00:16:03.375284 containerd[1596]: time="2025-04-30T00:16:03.375260700Z" level=info msg="TearDown network for sandbox \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\" successfully" Apr 30 00:16:03.375333 containerd[1596]: time="2025-04-30T00:16:03.375277591Z" level=info msg="StopPodSandbox for \"0b741eb5ff6cb78c374a86c1c7bf1b776f2eded98a0fc6847784f18da27bfa39\" returns successfully" Apr 30 00:16:03.375876 containerd[1596]: time="2025-04-30T00:16:03.375697135Z" level=info msg="StopPodSandbox for \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\"" Apr 30 00:16:03.375876 containerd[1596]: time="2025-04-30T00:16:03.375806532Z" level=info msg="TearDown network for sandbox \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\" successfully" Apr 30 00:16:03.375876 containerd[1596]: time="2025-04-30T00:16:03.375818144Z" level=info 
msg="StopPodSandbox for \"c3ee6a1268dc389b45b2e5e2d2c9217116c5e5ae186da20034b483da976dc036\" returns successfully" Apr 30 00:16:03.376174 containerd[1596]: time="2025-04-30T00:16:03.376126578Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" Apr 30 00:16:03.376262 containerd[1596]: time="2025-04-30T00:16:03.376235825Z" level=info msg="TearDown network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" successfully" Apr 30 00:16:03.376262 containerd[1596]: time="2025-04-30T00:16:03.376258006Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" returns successfully" Apr 30 00:16:03.376398 systemd[1]: run-netns-cni\x2d98d8c56c\x2df375\x2ddee0\x2d21f5\x2d516f416eea08.mount: Deactivated successfully. Apr 30 00:16:03.376820 containerd[1596]: time="2025-04-30T00:16:03.376777680Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:16:03.376952 containerd[1596]: time="2025-04-30T00:16:03.376917314Z" level=info msg="TearDown network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" successfully" Apr 30 00:16:03.376952 containerd[1596]: time="2025-04-30T00:16:03.376942291Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" returns successfully" Apr 30 00:16:03.377423 kubelet[2910]: E0430 00:16:03.377220 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:03.378205 containerd[1596]: time="2025-04-30T00:16:03.378170685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:4,}" Apr 30 00:16:03.378555 kubelet[2910]: I0430 00:16:03.378500 2910 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725" Apr 30 00:16:03.379408 containerd[1596]: time="2025-04-30T00:16:03.379380914Z" level=info msg="StopPodSandbox for \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\"" Apr 30 00:16:03.379585 containerd[1596]: time="2025-04-30T00:16:03.379547349Z" level=info msg="Ensure that sandbox aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725 in task-service has been cleanup successfully" Apr 30 00:16:03.379793 containerd[1596]: time="2025-04-30T00:16:03.379743480Z" level=info msg="TearDown network for sandbox \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\" successfully" Apr 30 00:16:03.379793 containerd[1596]: time="2025-04-30T00:16:03.379776803Z" level=info msg="StopPodSandbox for \"aeff33324668f0215f1b17700383990fea821b92cdde514f1b0fb31b04ae1725\" returns successfully" Apr 30 00:16:03.380291 containerd[1596]: time="2025-04-30T00:16:03.380264225Z" level=info msg="StopPodSandbox for \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\"" Apr 30 00:16:03.380595 containerd[1596]: time="2025-04-30T00:16:03.380534947Z" level=info msg="TearDown network for sandbox \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\" successfully" Apr 30 00:16:03.380595 containerd[1596]: time="2025-04-30T00:16:03.380562269Z" level=info msg="StopPodSandbox for \"8d74899dbb3e8449f4329cac6564a02f562e9826e03df2d5bae1568e3c382c57\" returns successfully" Apr 30 00:16:03.381122 kubelet[2910]: I0430 00:16:03.381095 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55" Apr 30 00:16:03.381207 containerd[1596]: time="2025-04-30T00:16:03.381086249Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:4,}" Apr 30 00:16:03.381674 containerd[1596]: time="2025-04-30T00:16:03.381622875Z" level=info msg="StopPodSandbox for \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\"" Apr 30 00:16:03.381962 containerd[1596]: time="2025-04-30T00:16:03.381933232Z" level=info msg="Ensure that sandbox 107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55 in task-service has been cleanup successfully" Apr 30 00:16:03.382187 containerd[1596]: time="2025-04-30T00:16:03.382165402Z" level=info msg="TearDown network for sandbox \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\" successfully" Apr 30 00:16:03.382187 containerd[1596]: time="2025-04-30T00:16:03.382184367Z" level=info msg="StopPodSandbox for \"107483b2ccc63a790ab48af53386fe708cfd78b1e1e0aa9e8a1daa93abfd2b55\" returns successfully" Apr 30 00:16:03.382591 containerd[1596]: time="2025-04-30T00:16:03.382554367Z" level=info msg="StopPodSandbox for \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\"" Apr 30 00:16:03.382696 containerd[1596]: time="2025-04-30T00:16:03.382662702Z" level=info msg="TearDown network for sandbox \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\" successfully" Apr 30 00:16:03.382696 containerd[1596]: time="2025-04-30T00:16:03.382673713Z" level=info msg="StopPodSandbox for \"7ce6bc0aadbc1be480d858f1e51de79b6cd8f91126db96975f0be25b389a33d3\" returns successfully" Apr 30 00:16:03.383221 containerd[1596]: time="2025-04-30T00:16:03.383188567Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" Apr 30 00:16:03.383338 containerd[1596]: time="2025-04-30T00:16:03.383311600Z" level=info msg="TearDown network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" successfully" Apr 30 00:16:03.383338 containerd[1596]: 
time="2025-04-30T00:16:03.383327960Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" returns successfully" Apr 30 00:16:03.383662 containerd[1596]: time="2025-04-30T00:16:03.383640812Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\"" Apr 30 00:16:03.383734 containerd[1596]: time="2025-04-30T00:16:03.383718089Z" level=info msg="TearDown network for sandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" successfully" Apr 30 00:16:03.383734 containerd[1596]: time="2025-04-30T00:16:03.383731274Z" level=info msg="StopPodSandbox for \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" returns successfully" Apr 30 00:16:03.383839 kubelet[2910]: I0430 00:16:03.383747 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea" Apr 30 00:16:03.384231 kubelet[2910]: E0430 00:16:03.384045 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:03.384366 containerd[1596]: time="2025-04-30T00:16:03.384342250Z" level=info msg="StopPodSandbox for \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\"" Apr 30 00:16:03.384700 containerd[1596]: time="2025-04-30T00:16:03.384415218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:4,}" Apr 30 00:16:03.384700 containerd[1596]: time="2025-04-30T00:16:03.384511439Z" level=info msg="Ensure that sandbox b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea in task-service has been cleanup successfully" Apr 30 00:16:03.384700 containerd[1596]: time="2025-04-30T00:16:03.384676211Z" level=info msg="TearDown network for sandbox 
\"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\" successfully" Apr 30 00:16:03.384700 containerd[1596]: time="2025-04-30T00:16:03.384690708Z" level=info msg="StopPodSandbox for \"b06cacd8726d129f4b126e9532918bb12e9e95d71251dcf136c257e7b1ded8ea\" returns successfully" Apr 30 00:16:03.385081 containerd[1596]: time="2025-04-30T00:16:03.385042825Z" level=info msg="StopPodSandbox for \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\"" Apr 30 00:16:03.385195 containerd[1596]: time="2025-04-30T00:16:03.385149837Z" level=info msg="TearDown network for sandbox \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\" successfully" Apr 30 00:16:03.385195 containerd[1596]: time="2025-04-30T00:16:03.385170667Z" level=info msg="StopPodSandbox for \"62048a283a99f25088f7e8ebb5b51b578acfa1cf1fd8966ff740016808172b34\" returns successfully" Apr 30 00:16:03.385719 containerd[1596]: time="2025-04-30T00:16:03.385690119Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" Apr 30 00:16:03.385983 containerd[1596]: time="2025-04-30T00:16:03.385915125Z" level=info msg="TearDown network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" successfully" Apr 30 00:16:03.385983 containerd[1596]: time="2025-04-30T00:16:03.385925945Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" returns successfully" Apr 30 00:16:03.386106 kubelet[2910]: I0430 00:16:03.386056 2910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab" Apr 30 00:16:03.386176 containerd[1596]: time="2025-04-30T00:16:03.386151793Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 00:16:03.386280 containerd[1596]: time="2025-04-30T00:16:03.386250779Z" level=info msg="TearDown network for 
sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" successfully" Apr 30 00:16:03.387354 containerd[1596]: time="2025-04-30T00:16:03.386272151Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" returns successfully" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.386571045Z" level=info msg="StopPodSandbox for \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\"" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.387700472Z" level=info msg="Ensure that sandbox 943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab in task-service has been cleanup successfully" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.387735188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:4,}" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.388855778Z" level=info msg="TearDown network for sandbox \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\" successfully" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.388872860Z" level=info msg="StopPodSandbox for \"943694d9fce0ccfc0cee1969debaa39b8c641d50a1983722d68dada3386de0ab\" returns successfully" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.389338621Z" level=info msg="StopPodSandbox for \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\"" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.389450743Z" level=info msg="TearDown network for sandbox \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\" successfully" Apr 30 00:16:03.391050 containerd[1596]: time="2025-04-30T00:16:03.389461684Z" level=info msg="StopPodSandbox for \"d4dcaaf4b265802c2c47a7a705656cc28da61cd6a4003c18e7487d54f068a476\" returns successfully" Apr 30 00:16:03.391050 
containerd[1596]: time="2025-04-30T00:16:03.390840723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:5,}" Apr 30 00:16:03.430747 containerd[1596]: time="2025-04-30T00:16:03.430627658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:16:03.430747 containerd[1596]: time="2025-04-30T00:16:03.430697740Z" level=info msg="RemovePodSandbox \"7a1af0634c09e74ab6e6c39fb9b02ba7bf9bdaa8912e9bba97ed3c52091465cd\" returns successfully" Apr 30 00:16:03.431388 containerd[1596]: time="2025-04-30T00:16:03.431330547Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" Apr 30 00:16:03.431572 containerd[1596]: time="2025-04-30T00:16:03.431478658Z" level=info msg="TearDown network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" successfully" Apr 30 00:16:03.431572 containerd[1596]: time="2025-04-30T00:16:03.431491602Z" level=info msg="StopPodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" returns successfully" Apr 30 00:16:03.431999 containerd[1596]: time="2025-04-30T00:16:03.431974726Z" level=info msg="RemovePodSandbox for \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" Apr 30 00:16:03.433332 containerd[1596]: time="2025-04-30T00:16:03.432090534Z" level=info msg="Forcibly stopping sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\"" Apr 30 00:16:03.433332 containerd[1596]: time="2025-04-30T00:16:03.432166840Z" level=info msg="TearDown network for sandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" successfully" Apr 30 00:16:03.438928 containerd[1596]: 
time="2025-04-30T00:16:03.438874388Z" level=info msg="CreateContainer within sandbox \"51c7b42a5ad05a7b32cdb12845a2dc59926862365421f75b7267f89293b43c36\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"863e21147538d3e5792312e55c3be09daaebc786977c0943ac2825ce2a6ac68b\"" Apr 30 00:16:03.439454 containerd[1596]: time="2025-04-30T00:16:03.439413087Z" level=info msg="StartContainer for \"863e21147538d3e5792312e55c3be09daaebc786977c0943ac2825ce2a6ac68b\"" Apr 30 00:16:03.654150 containerd[1596]: time="2025-04-30T00:16:03.653991595Z" level=info msg="StartContainer for \"863e21147538d3e5792312e55c3be09daaebc786977c0943ac2825ce2a6ac68b\" returns successfully" Apr 30 00:16:03.694416 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:16:03.694563 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 00:16:04.082685 systemd[1]: run-netns-cni\x2dc50dfe51\x2de1f7\x2dbd95\x2d5312\x2d3a73d2185c72.mount: Deactivated successfully. Apr 30 00:16:04.082950 systemd[1]: run-netns-cni\x2de4011eb6\x2de0a4\x2dc2cc\x2d41a0\x2d493c82d81385.mount: Deactivated successfully. Apr 30 00:16:04.085068 systemd[1]: run-netns-cni\x2d889c64f5\x2d2668\x2dc46d\x2d899c\x2d1c5188e16123.mount: Deactivated successfully. Apr 30 00:16:04.085259 systemd[1]: run-netns-cni\x2da38b5652\x2d0f90\x2df9c3\x2d4e66\x2dd897bfebf0db.mount: Deactivated successfully. Apr 30 00:16:04.085690 systemd[1]: run-netns-cni\x2d4fe165f3\x2db734\x2d1b5a\x2d6a88\x2de282beb7e1d1.mount: Deactivated successfully. Apr 30 00:16:04.142708 containerd[1596]: time="2025-04-30T00:16:04.142611754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:04.143320 containerd[1596]: time="2025-04-30T00:16:04.142730319Z" level=info msg="RemovePodSandbox \"b1530ec446746a3796cc8d0e7ab5970ee5b7de585ea43299d2ccf65267a5b282\" returns successfully" Apr 30 00:16:04.143939 containerd[1596]: time="2025-04-30T00:16:04.143710214Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:16:04.143939 containerd[1596]: time="2025-04-30T00:16:04.143835962Z" level=info msg="TearDown network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" successfully" Apr 30 00:16:04.143939 containerd[1596]: time="2025-04-30T00:16:04.143850881Z" level=info msg="StopPodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" returns successfully" Apr 30 00:16:04.144265 containerd[1596]: time="2025-04-30T00:16:04.144239557Z" level=info msg="RemovePodSandbox for \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:16:04.144342 containerd[1596]: time="2025-04-30T00:16:04.144266889Z" level=info msg="Forcibly stopping sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\"" Apr 30 00:16:04.144402 containerd[1596]: time="2025-04-30T00:16:04.144350528Z" level=info msg="TearDown network for sandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" successfully" Apr 30 00:16:04.397483 kubelet[2910]: E0430 00:16:04.396932 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.798264208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.798357886Z" level=info msg="RemovePodSandbox \"91cc6602af24053dfec766c7f7c6e3c6d08f6813a72325097bb08174669ccf41\" returns successfully" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.798960447Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.799100773Z" level=info msg="TearDown network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" successfully" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.799114900Z" level=info msg="StopPodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" returns successfully" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.799370444Z" level=info msg="RemovePodSandbox for \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.799394599Z" level=info msg="Forcibly stopping sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\"" Apr 30 00:16:04.803757 containerd[1596]: time="2025-04-30T00:16:04.802470957Z" level=info msg="TearDown network for sandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" successfully" Apr 30 00:16:05.416928 kubelet[2910]: E0430 00:16:05.414867 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:05.926012 containerd[1596]: time="2025-04-30T00:16:05.924346409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:05.926012 containerd[1596]: time="2025-04-30T00:16:05.924448082Z" level=info msg="RemovePodSandbox \"53ac58f6212221f4400a252f4f555851a132878fb4aca9e91619559eb03ef5e2\" returns successfully" Apr 30 00:16:05.939033 containerd[1596]: time="2025-04-30T00:16:05.934000548Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:16:05.939033 containerd[1596]: time="2025-04-30T00:16:05.934201369Z" level=info msg="TearDown network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" successfully" Apr 30 00:16:05.939033 containerd[1596]: time="2025-04-30T00:16:05.934219193Z" level=info msg="StopPodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" returns successfully" Apr 30 00:16:05.939033 containerd[1596]: time="2025-04-30T00:16:05.936353942Z" level=info msg="RemovePodSandbox for \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:16:05.939033 containerd[1596]: time="2025-04-30T00:16:05.936413404Z" level=info msg="Forcibly stopping sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\"" Apr 30 00:16:05.939033 containerd[1596]: time="2025-04-30T00:16:05.936565592Z" level=info msg="TearDown network for sandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" successfully" Apr 30 00:16:06.006420 containerd[1596]: time="2025-04-30T00:16:06.006143960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:06.006420 containerd[1596]: time="2025-04-30T00:16:06.006247487Z" level=info msg="RemovePodSandbox \"48fafa5d99ece2ec3d9902e257be3e1644dfbe27274e4a0d99c6923954a5cdca\" returns successfully" Apr 30 00:16:06.021746 containerd[1596]: time="2025-04-30T00:16:06.021675932Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" Apr 30 00:16:06.022395 containerd[1596]: time="2025-04-30T00:16:06.022208062Z" level=info msg="TearDown network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" successfully" Apr 30 00:16:06.022395 containerd[1596]: time="2025-04-30T00:16:06.022230735Z" level=info msg="StopPodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" returns successfully" Apr 30 00:16:06.024714 containerd[1596]: time="2025-04-30T00:16:06.024241753Z" level=info msg="RemovePodSandbox for \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" Apr 30 00:16:06.024714 containerd[1596]: time="2025-04-30T00:16:06.024329580Z" level=info msg="Forcibly stopping sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\"" Apr 30 00:16:06.024714 containerd[1596]: time="2025-04-30T00:16:06.024526795Z" level=info msg="TearDown network for sandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" successfully" Apr 30 00:16:06.074222 containerd[1596]: time="2025-04-30T00:16:06.073929609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:06.074222 containerd[1596]: time="2025-04-30T00:16:06.074030902Z" level=info msg="RemovePodSandbox \"69bd431c124455ffa9a175c19285c53f651098f7861e9b163cbf42de1d3cfa38\" returns successfully" Apr 30 00:16:06.076161 containerd[1596]: time="2025-04-30T00:16:06.076089932Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 00:16:06.076333 containerd[1596]: time="2025-04-30T00:16:06.076300271Z" level=info msg="TearDown network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" successfully" Apr 30 00:16:06.076382 containerd[1596]: time="2025-04-30T00:16:06.076327854Z" level=info msg="StopPodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" returns successfully" Apr 30 00:16:06.078065 containerd[1596]: time="2025-04-30T00:16:06.078023213Z" level=info msg="RemovePodSandbox for \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 00:16:06.078065 containerd[1596]: time="2025-04-30T00:16:06.078064762Z" level=info msg="Forcibly stopping sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\"" Apr 30 00:16:06.078237 containerd[1596]: time="2025-04-30T00:16:06.078170642Z" level=info msg="TearDown network for sandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" successfully" Apr 30 00:16:06.170127 systemd-networkd[1264]: cali85c74dd1047: Link UP Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.173871908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.173959535Z" level=info msg="RemovePodSandbox \"ad7d846f8b4a3e050c3e2048072434019299c08f52d1e5456eaf0e1a82043f12\" returns successfully" Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.177516558Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.177678146Z" level=info msg="TearDown network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" successfully" Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.177698083Z" level=info msg="StopPodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" returns successfully" Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.179385778Z" level=info msg="RemovePodSandbox for \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.179550100Z" level=info msg="Forcibly stopping sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\"" Apr 30 00:16:06.180521 containerd[1596]: time="2025-04-30T00:16:06.180197960Z" level=info msg="TearDown network for sandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" successfully" Apr 30 00:16:06.170439 systemd-networkd[1264]: cali85c74dd1047: Gained carrier Apr 30 00:16:06.213960 kubelet[2910]: I0430 00:16:06.213844 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gbkxd" podStartSLOduration=4.056553317 podStartE2EDuration="38.213820131s" podCreationTimestamp="2025-04-30 00:15:28 +0000 UTC" firstStartedPulling="2025-04-30 00:15:28.978165262 +0000 UTC m=+30.033779782" lastFinishedPulling="2025-04-30 00:16:03.135432076 +0000 UTC m=+64.191046596" observedRunningTime="2025-04-30 00:16:04.487614846 +0000 UTC m=+65.543229366" 
watchObservedRunningTime="2025-04-30 00:16:06.213820131 +0000 UTC m=+67.269434661" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.674 [INFO][4765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.701 [INFO][4765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0 calico-kube-controllers-6fd77dd9ff- calico-system 5ad90aba-5f5d-4618-814e-e0df441b2efc 779 0 2025-04-30 00:15:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fd77dd9ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fd77dd9ff-bqkxv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85c74dd1047 [] []}} ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.701 [INFO][4765] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.809 [INFO][4812] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" HandleID="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" 
Workload="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.874 [INFO][4812] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" HandleID="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Workload="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384a10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fd77dd9ff-bqkxv", "timestamp":"2025-04-30 00:16:05.809382441 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.874 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.874 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.875 [INFO][4812] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.880 [INFO][4812] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.924 [INFO][4812] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.966 [INFO][4812] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.974 [INFO][4812] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.982 [INFO][4812] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.982 [INFO][4812] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:05.989 [INFO][4812] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88 Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:06.015 [INFO][4812] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:06.127 [INFO][4812] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:06.128 [INFO][4812] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" host="localhost" Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:06.128 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:16:06.218999 containerd[1596]: 2025-04-30 00:16:06.128 [INFO][4812] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" HandleID="k8s-pod-network.9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Workload="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.220335 containerd[1596]: 2025-04-30 00:16:06.136 [INFO][4765] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0", GenerateName:"calico-kube-controllers-6fd77dd9ff-", Namespace:"calico-system", SelfLink:"", UID:"5ad90aba-5f5d-4618-814e-e0df441b2efc", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd77dd9ff", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fd77dd9ff-bqkxv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85c74dd1047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:06.220335 containerd[1596]: 2025-04-30 00:16:06.136 [INFO][4765] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.220335 containerd[1596]: 2025-04-30 00:16:06.136 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85c74dd1047 ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.220335 containerd[1596]: 2025-04-30 00:16:06.171 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.220335 containerd[1596]: 2025-04-30 00:16:06.172 [INFO][4765] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0", GenerateName:"calico-kube-controllers-6fd77dd9ff-", Namespace:"calico-system", SelfLink:"", UID:"5ad90aba-5f5d-4618-814e-e0df441b2efc", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd77dd9ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88", Pod:"calico-kube-controllers-6fd77dd9ff-bqkxv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85c74dd1047", MAC:"d6:0d:6f:72:60:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:06.220335 containerd[1596]: 2025-04-30 00:16:06.215 [INFO][4765] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88" Namespace="calico-system" Pod="calico-kube-controllers-6fd77dd9ff-bqkxv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd77dd9ff--bqkxv-eth0" Apr 30 00:16:06.322105 containerd[1596]: time="2025-04-30T00:16:06.322041490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:16:06.322411 containerd[1596]: time="2025-04-30T00:16:06.322390252Z" level=info msg="RemovePodSandbox \"72d1018bf29795d5039b8874ef7748a4d137e7af6307e93083fe2bdef08ad77f\" returns successfully" Apr 30 00:16:06.370940 kernel: bpftool[5002]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:16:06.465394 containerd[1596]: time="2025-04-30T00:16:06.465099234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:06.465394 containerd[1596]: time="2025-04-30T00:16:06.465176451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:06.465394 containerd[1596]: time="2025-04-30T00:16:06.465191198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:06.465646 containerd[1596]: time="2025-04-30T00:16:06.465522658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:06.492665 systemd-networkd[1264]: caliedb358ebe1f: Link UP Apr 30 00:16:06.501057 systemd-networkd[1264]: caliedb358ebe1f: Gained carrier Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:05.258 [INFO][4716] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:05.386 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0 coredns-7db6d8ff4d- kube-system 8d8fc62d-6f2c-4db1-b700-84ab75075a8b 778 0 2025-04-30 00:15:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-2nh9s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliedb358ebe1f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:05.386 [INFO][4716] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:05.809 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" HandleID="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Workload="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 00:16:06.590180 containerd[1596]: 
2025-04-30 00:16:05.875 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" HandleID="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Workload="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000428680), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-2nh9s", "timestamp":"2025-04-30 00:16:05.809466952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:05.875 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.128 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.128 [INFO][4733] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.131 [INFO][4733] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.139 [INFO][4733] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.150 [INFO][4733] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.155 [INFO][4733] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.160 [INFO][4733] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.160 [INFO][4733] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.183 [INFO][4733] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892 Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.279 [INFO][4733] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.434 [INFO][4733] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.435 [INFO][4733] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" host="localhost" Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.435 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:16:06.590180 containerd[1596]: 2025-04-30 00:16:06.435 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" HandleID="k8s-pod-network.736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Workload="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 00:16:06.591014 containerd[1596]: 2025-04-30 00:16:06.450 [INFO][4716] cni-plugin/k8s.go 386: Populated endpoint ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8d8fc62d-6f2c-4db1-b700-84ab75075a8b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-2nh9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedb358ebe1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:06.591014 containerd[1596]: 2025-04-30 00:16:06.450 [INFO][4716] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 00:16:06.591014 containerd[1596]: 2025-04-30 00:16:06.450 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedb358ebe1f ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 00:16:06.591014 containerd[1596]: 2025-04-30 00:16:06.554 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 
00:16:06.591014 containerd[1596]: 2025-04-30 00:16:06.556 [INFO][4716] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8d8fc62d-6f2c-4db1-b700-84ab75075a8b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892", Pod:"coredns-7db6d8ff4d-2nh9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedb358ebe1f", MAC:"6a:d5:b2:76:02:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:06.591014 containerd[1596]: 2025-04-30 00:16:06.586 [INFO][4716] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nh9s" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nh9s-eth0" Apr 30 00:16:06.632414 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:16:06.678231 containerd[1596]: time="2025-04-30T00:16:06.671650679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:06.678231 containerd[1596]: time="2025-04-30T00:16:06.673141731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:06.678231 containerd[1596]: time="2025-04-30T00:16:06.673203839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:06.678231 containerd[1596]: time="2025-04-30T00:16:06.673386997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:06.718542 systemd-networkd[1264]: cali80347021050: Link UP Apr 30 00:16:06.720748 systemd-networkd[1264]: cali80347021050: Gained carrier Apr 30 00:16:06.763930 containerd[1596]: time="2025-04-30T00:16:06.763824768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd77dd9ff-bqkxv,Uid:5ad90aba-5f5d-4618-814e-e0df441b2efc,Namespace:calico-system,Attempt:4,} returns sandbox id \"9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88\"" Apr 30 00:16:06.768864 containerd[1596]: time="2025-04-30T00:16:06.768817769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:16:06.769195 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:05.709 [INFO][4777] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:05.777 [INFO][4777] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0 coredns-7db6d8ff4d- kube-system 3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed 772 0 2025-04-30 00:15:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-km5z4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80347021050 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:05.777 [INFO][4777] cni-plugin/k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:05.861 [INFO][4832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" HandleID="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Workload="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:05.881 [INFO][4832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" HandleID="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Workload="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004eeb80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-km5z4", "timestamp":"2025-04-30 00:16:05.861725239 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:05.881 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.435 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.441 [INFO][4832] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.451 [INFO][4832] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.489 [INFO][4832] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.554 [INFO][4832] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.565 [INFO][4832] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.583 [INFO][4832] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.583 [INFO][4832] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.588 [INFO][4832] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7 Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.621 [INFO][4832] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.697 [INFO][4832] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.697 [INFO][4832] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" host="localhost" Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.697 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:16:06.792727 containerd[1596]: 2025-04-30 00:16:06.697 [INFO][4832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" HandleID="k8s-pod-network.6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Workload="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 00:16:06.797601 containerd[1596]: 2025-04-30 00:16:06.712 [INFO][4777] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-km5z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80347021050", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:06.797601 containerd[1596]: 2025-04-30 00:16:06.712 [INFO][4777] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 00:16:06.797601 containerd[1596]: 2025-04-30 00:16:06.712 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80347021050 ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 00:16:06.797601 containerd[1596]: 2025-04-30 00:16:06.726 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 
00:16:06.797601 containerd[1596]: 2025-04-30 00:16:06.726 [INFO][4777] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7", Pod:"coredns-7db6d8ff4d-km5z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80347021050", MAC:"a6:93:b1:86:a8:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:06.797601 containerd[1596]: 2025-04-30 00:16:06.780 [INFO][4777] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km5z4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--km5z4-eth0" Apr 30 00:16:06.827711 containerd[1596]: time="2025-04-30T00:16:06.827655578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nh9s,Uid:8d8fc62d-6f2c-4db1-b700-84ab75075a8b,Namespace:kube-system,Attempt:4,} returns sandbox id \"736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892\"" Apr 30 00:16:06.829640 kubelet[2910]: E0430 00:16:06.829057 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:06.836060 containerd[1596]: time="2025-04-30T00:16:06.835988341Z" level=info msg="CreateContainer within sandbox \"736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:16:06.886484 containerd[1596]: time="2025-04-30T00:16:06.884832294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:06.886484 containerd[1596]: time="2025-04-30T00:16:06.884958494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:06.886484 containerd[1596]: time="2025-04-30T00:16:06.884979654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:06.886484 containerd[1596]: time="2025-04-30T00:16:06.886322494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:06.961633 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:16:07.024773 containerd[1596]: time="2025-04-30T00:16:07.024717687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km5z4,Uid:3e2d70e1-9e2f-4cfd-96d3-6ca6e30433ed,Namespace:kube-system,Attempt:4,} returns sandbox id \"6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7\"" Apr 30 00:16:07.030916 kubelet[2910]: E0430 00:16:07.025929 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:07.031075 containerd[1596]: time="2025-04-30T00:16:07.031026039Z" level=info msg="CreateContainer within sandbox \"6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:16:07.090899 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:58620.service - OpenSSH per-connection server daemon (10.0.0.1:58620). Apr 30 00:16:07.111521 systemd-networkd[1264]: vxlan.calico: Link UP Apr 30 00:16:07.111538 systemd-networkd[1264]: vxlan.calico: Gained carrier Apr 30 00:16:07.166014 systemd-networkd[1264]: cali12a15827fbb: Link UP Apr 30 00:16:07.166415 systemd-networkd[1264]: cali12a15827fbb: Gained carrier Apr 30 00:16:07.326328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247076104.mount: Deactivated successfully. 
Apr 30 00:16:07.331478 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 58620 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:16:07.340346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617410054.mount: Deactivated successfully. Apr 30 00:16:07.357339 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:05.742 [INFO][4794] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:05.823 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0 calico-apiserver-676f79f8bf- calico-apiserver c27777f5-4b50-4a4b-8544-5463d251461f 775 0 2025-04-30 00:15:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:676f79f8bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-676f79f8bf-5c5xv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12a15827fbb [] []}} ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:05.823 [INFO][4794] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:05.895 [INFO][4838] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" HandleID="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Workload="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:05.953 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" HandleID="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Workload="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-676f79f8bf-5c5xv", "timestamp":"2025-04-30 00:16:05.895837142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:05.954 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.701 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.701 [INFO][4838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.725 [INFO][4838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.754 [INFO][4838] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.883 [INFO][4838] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.896 [INFO][4838] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.913 [INFO][4838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.913 [INFO][4838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.920 [INFO][4838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4 Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:06.954 [INFO][4838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:07.123 [INFO][4838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:07.123 [INFO][4838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" host="localhost" Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:07.123 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:16:07.374186 containerd[1596]: 2025-04-30 00:16:07.123 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" HandleID="k8s-pod-network.62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Workload="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.375013 containerd[1596]: 2025-04-30 00:16:07.135 [INFO][4794] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0", GenerateName:"calico-apiserver-676f79f8bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c27777f5-4b50-4a4b-8544-5463d251461f", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676f79f8bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-676f79f8bf-5c5xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12a15827fbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:07.375013 containerd[1596]: 2025-04-30 00:16:07.135 [INFO][4794] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.375013 containerd[1596]: 2025-04-30 00:16:07.135 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12a15827fbb ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.375013 containerd[1596]: 2025-04-30 00:16:07.163 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.375013 containerd[1596]: 2025-04-30 00:16:07.174 [INFO][4794] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0", GenerateName:"calico-apiserver-676f79f8bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c27777f5-4b50-4a4b-8544-5463d251461f", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676f79f8bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4", Pod:"calico-apiserver-676f79f8bf-5c5xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12a15827fbb", MAC:"4a:21:c7:4c:1a:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:07.375013 containerd[1596]: 2025-04-30 00:16:07.371 [INFO][4794] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4" 
Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-5c5xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--5c5xv-eth0" Apr 30 00:16:07.405666 systemd-logind[1582]: New session 19 of user core. Apr 30 00:16:07.412606 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:16:07.436265 systemd-networkd[1264]: cali85c74dd1047: Gained IPv6LL Apr 30 00:16:07.530924 systemd-networkd[1264]: cali513685ecdc8: Link UP Apr 30 00:16:07.532060 systemd-networkd[1264]: cali513685ecdc8: Gained carrier Apr 30 00:16:07.754499 containerd[1596]: time="2025-04-30T00:16:07.754436557Z" level=info msg="CreateContainer within sandbox \"736dd36e16b487df4b4b65be5fc0a29e9b3868039f18a204dc42c70c3d2f8892\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ebaefc79d8304c47507ead63b28bca3ea5e91cc027ed62f54bc7711a1e43d4e7\"" Apr 30 00:16:07.755074 containerd[1596]: time="2025-04-30T00:16:07.755037660Z" level=info msg="StartContainer for \"ebaefc79d8304c47507ead63b28bca3ea5e91cc027ed62f54bc7711a1e43d4e7\"" Apr 30 00:16:07.756102 systemd-networkd[1264]: caliedb358ebe1f: Gained IPv6LL Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:05.864 [INFO][4821] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:05.896 [INFO][4821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4t95w-eth0 csi-node-driver- calico-system 60f4275e-2eec-4a29-a8cc-8e6f60dbe335 648 0 2025-04-30 00:15:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4t95w eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali513685ecdc8 [] []}} ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:05.897 [INFO][4821] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:06.068 [INFO][4952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" HandleID="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Workload="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:06.135 [INFO][4952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" HandleID="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Workload="localhost-k8s-csi--node--driver--4t95w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000621c40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4t95w", "timestamp":"2025-04-30 00:16:06.066870095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:06.135 [INFO][4952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.123 [INFO][4952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.123 [INFO][4952] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.147 [INFO][4952] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.376 [INFO][4952] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.408 [INFO][4952] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.411 [INFO][4952] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.415 [INFO][4952] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.415 [INFO][4952] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.419 [INFO][4952] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40 Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.435 [INFO][4952] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.519 [INFO][4952] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.519 [INFO][4952] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" host="localhost" Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.519 [INFO][4952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:16:07.811302 containerd[1596]: 2025-04-30 00:16:07.519 [INFO][4952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" HandleID="k8s-pod-network.ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Workload="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.813979 containerd[1596]: 2025-04-30 00:16:07.525 [INFO][4821] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4t95w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60f4275e-2eec-4a29-a8cc-8e6f60dbe335", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4t95w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513685ecdc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:07.813979 containerd[1596]: 2025-04-30 00:16:07.525 [INFO][4821] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.813979 containerd[1596]: 2025-04-30 00:16:07.525 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali513685ecdc8 ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.813979 containerd[1596]: 2025-04-30 00:16:07.529 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.813979 containerd[1596]: 2025-04-30 00:16:07.533 [INFO][4821] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4t95w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"60f4275e-2eec-4a29-a8cc-8e6f60dbe335", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40", Pod:"csi-node-driver-4t95w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513685ecdc8", MAC:"26:49:d8:93:76:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:07.813979 containerd[1596]: 2025-04-30 00:16:07.804 [INFO][4821] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40" Namespace="calico-system" Pod="csi-node-driver-4t95w" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4t95w-eth0" Apr 30 00:16:07.817961 sshd[5232]: Connection closed by 10.0.0.1 port 58620 Apr 30 00:16:07.818657 containerd[1596]: time="2025-04-30T00:16:07.818411252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:07.818657 containerd[1596]: time="2025-04-30T00:16:07.818501083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:07.818657 containerd[1596]: time="2025-04-30T00:16:07.818518306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:07.818657 containerd[1596]: time="2025-04-30T00:16:07.818622594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:07.818813 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:07.823399 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:58620.service: Deactivated successfully. Apr 30 00:16:07.826386 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:16:07.827166 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:16:07.828528 systemd-logind[1582]: Removed session 19. 
Apr 30 00:16:07.843846 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:16:07.870875 containerd[1596]: time="2025-04-30T00:16:07.870835801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-5c5xv,Uid:c27777f5-4b50-4a4b-8544-5463d251461f,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4\"" Apr 30 00:16:07.948024 systemd-networkd[1264]: cali80347021050: Gained IPv6LL Apr 30 00:16:08.089967 containerd[1596]: time="2025-04-30T00:16:08.089827944Z" level=info msg="StartContainer for \"ebaefc79d8304c47507ead63b28bca3ea5e91cc027ed62f54bc7711a1e43d4e7\" returns successfully" Apr 30 00:16:08.462477 containerd[1596]: time="2025-04-30T00:16:08.456139956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:08.462477 containerd[1596]: time="2025-04-30T00:16:08.456231740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:08.462477 containerd[1596]: time="2025-04-30T00:16:08.456247701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:08.462477 containerd[1596]: time="2025-04-30T00:16:08.456384871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:08.468710 kubelet[2910]: E0430 00:16:08.468673 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:08.485317 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:16:08.503970 systemd-networkd[1264]: calie7c6fef1a5b: Link UP Apr 30 00:16:08.504202 systemd-networkd[1264]: calie7c6fef1a5b: Gained carrier Apr 30 00:16:08.509081 containerd[1596]: time="2025-04-30T00:16:08.509041504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t95w,Uid:60f4275e-2eec-4a29-a8cc-8e6f60dbe335,Namespace:calico-system,Attempt:5,} returns sandbox id \"ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40\"" Apr 30 00:16:08.716334 systemd-networkd[1264]: cali12a15827fbb: Gained IPv6LL Apr 30 00:16:08.761185 containerd[1596]: time="2025-04-30T00:16:08.761107340Z" level=info msg="CreateContainer within sandbox \"6348eddab3a72cdaa07fbfd72157c1436c0258809b845526b5c54ca0dbaeabf7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f195de480f94aa72dea9f48eb137a757106ab7bb7504d3ecd3c8401025ef07d\"" Apr 30 00:16:08.761862 containerd[1596]: time="2025-04-30T00:16:08.761832500Z" level=info msg="StartContainer for \"3f195de480f94aa72dea9f48eb137a757106ab7bb7504d3ecd3c8401025ef07d\"" Apr 30 00:16:08.784783 kubelet[2910]: I0430 00:16:08.784128 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2nh9s" podStartSLOduration=55.784102142 podStartE2EDuration="55.784102142s" podCreationTimestamp="2025-04-30 00:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:16:08.783623852 +0000 UTC m=+69.839238392" 
watchObservedRunningTime="2025-04-30 00:16:08.784102142 +0000 UTC m=+69.839716662" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:06.051 [INFO][4873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:06.139 [INFO][4873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0 calico-apiserver-676f79f8bf- calico-apiserver ad18deb4-55a1-4a60-89a6-511214b20063 781 0 2025-04-30 00:15:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:676f79f8bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-676f79f8bf-gn2h7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7c6fef1a5b [] []}} ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:06.139 [INFO][4873] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:06.224 [INFO][4972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" HandleID="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Workload="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.828141 
containerd[1596]: 2025-04-30 00:16:06.440 [INFO][4972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" HandleID="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Workload="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00016e020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-676f79f8bf-gn2h7", "timestamp":"2025-04-30 00:16:06.224479541 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:06.440 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:07.519 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:07.519 [INFO][4972] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:07.523 [INFO][4972] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:07.544 [INFO][4972] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:07.861 [INFO][4972] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:07.902 [INFO][4972] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.165 [INFO][4972] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.165 [INFO][4972] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.201 [INFO][4972] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240 Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.452 [INFO][4972] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.491 [INFO][4972] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.492 [INFO][4972] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" host="localhost" Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.492 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:16:08.828141 containerd[1596]: 2025-04-30 00:16:08.492 [INFO][4972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" HandleID="k8s-pod-network.49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Workload="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.829593 containerd[1596]: 2025-04-30 00:16:08.496 [INFO][4873] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0", GenerateName:"calico-apiserver-676f79f8bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad18deb4-55a1-4a60-89a6-511214b20063", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676f79f8bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-676f79f8bf-gn2h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7c6fef1a5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:08.829593 containerd[1596]: 2025-04-30 00:16:08.498 [INFO][4873] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.829593 containerd[1596]: 2025-04-30 00:16:08.498 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7c6fef1a5b ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.829593 containerd[1596]: 2025-04-30 00:16:08.504 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.829593 containerd[1596]: 2025-04-30 00:16:08.505 [INFO][4873] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0", GenerateName:"calico-apiserver-676f79f8bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad18deb4-55a1-4a60-89a6-511214b20063", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676f79f8bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240", Pod:"calico-apiserver-676f79f8bf-gn2h7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7c6fef1a5b", MAC:"3e:a6:89:29:42:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:16:08.829593 containerd[1596]: 2025-04-30 00:16:08.823 [INFO][4873] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240" 
Namespace="calico-apiserver" Pod="calico-apiserver-676f79f8bf-gn2h7" WorkloadEndpoint="localhost-k8s-calico--apiserver--676f79f8bf--gn2h7-eth0" Apr 30 00:16:08.845062 systemd-networkd[1264]: cali513685ecdc8: Gained IPv6LL Apr 30 00:16:09.105738 systemd-networkd[1264]: vxlan.calico: Gained IPv6LL Apr 30 00:16:09.161566 containerd[1596]: time="2025-04-30T00:16:09.161400767Z" level=info msg="StartContainer for \"3f195de480f94aa72dea9f48eb137a757106ab7bb7504d3ecd3c8401025ef07d\" returns successfully" Apr 30 00:16:09.473718 kubelet[2910]: E0430 00:16:09.473678 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:09.473718 kubelet[2910]: E0430 00:16:09.473718 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:09.706286 kubelet[2910]: I0430 00:16:09.706018 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-km5z4" podStartSLOduration=56.705994915 podStartE2EDuration="56.705994915s" podCreationTimestamp="2025-04-30 00:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:16:09.705660137 +0000 UTC m=+70.761274657" watchObservedRunningTime="2025-04-30 00:16:09.705994915 +0000 UTC m=+70.761609435" Apr 30 00:16:09.750826 containerd[1596]: time="2025-04-30T00:16:09.749600182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:09.751782 containerd[1596]: time="2025-04-30T00:16:09.750987455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:09.751782 containerd[1596]: time="2025-04-30T00:16:09.751066826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:09.752057 containerd[1596]: time="2025-04-30T00:16:09.751992589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:09.786537 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:16:09.805392 systemd-networkd[1264]: calie7c6fef1a5b: Gained IPv6LL Apr 30 00:16:09.823798 containerd[1596]: time="2025-04-30T00:16:09.823749923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676f79f8bf-gn2h7,Uid:ad18deb4-55a1-4a60-89a6-511214b20063,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240\"" Apr 30 00:16:10.478144 kubelet[2910]: E0430 00:16:10.478106 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:11.480164 kubelet[2910]: E0430 00:16:11.480129 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:11.493383 kubelet[2910]: E0430 00:16:11.493305 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:12.482354 kubelet[2910]: E0430 00:16:12.482316 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 
00:16:12.831112 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:58626.service - OpenSSH per-connection server daemon (10.0.0.1:58626). Apr 30 00:16:12.877108 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 58626 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:16:12.878675 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:12.883223 systemd-logind[1582]: New session 20 of user core. Apr 30 00:16:12.895176 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:16:13.070402 containerd[1596]: time="2025-04-30T00:16:13.068778366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:13.090442 sshd[5539]: Connection closed by 10.0.0.1 port 58626 Apr 30 00:16:13.092469 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:13.097634 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:58626.service: Deactivated successfully. Apr 30 00:16:13.100571 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:16:13.100592 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:16:13.101812 systemd-logind[1582]: Removed session 20. 
Apr 30 00:16:13.767808 containerd[1596]: time="2025-04-30T00:16:13.767736468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 00:16:13.828571 containerd[1596]: time="2025-04-30T00:16:13.828494081Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:13.845097 containerd[1596]: time="2025-04-30T00:16:13.845041295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:13.845747 containerd[1596]: time="2025-04-30T00:16:13.845716666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 7.076632852s" Apr 30 00:16:13.845747 containerd[1596]: time="2025-04-30T00:16:13.845745572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 00:16:13.853750 containerd[1596]: time="2025-04-30T00:16:13.853534310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:16:13.861650 containerd[1596]: time="2025-04-30T00:16:13.861422630Z" level=info msg="CreateContainer within sandbox \"9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:16:14.494827 containerd[1596]: time="2025-04-30T00:16:14.494773127Z" level=info 
msg="CreateContainer within sandbox \"9c72b62629d2ef102254e9e617fee6896bbbb5a7e98124ace2103ecac3c73c88\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"35266a092232aa388a6be893c4b2a880ee402cfffcd06de765d5f53a36da53be\"" Apr 30 00:16:14.495506 containerd[1596]: time="2025-04-30T00:16:14.495473799Z" level=info msg="StartContainer for \"35266a092232aa388a6be893c4b2a880ee402cfffcd06de765d5f53a36da53be\"" Apr 30 00:16:14.781405 containerd[1596]: time="2025-04-30T00:16:14.781273459Z" level=info msg="StartContainer for \"35266a092232aa388a6be893c4b2a880ee402cfffcd06de765d5f53a36da53be\" returns successfully" Apr 30 00:16:15.822652 kubelet[2910]: I0430 00:16:15.822399 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fd77dd9ff-bqkxv" podStartSLOduration=40.736985546 podStartE2EDuration="47.822376442s" podCreationTimestamp="2025-04-30 00:15:28 +0000 UTC" firstStartedPulling="2025-04-30 00:16:06.767709795 +0000 UTC m=+67.823324315" lastFinishedPulling="2025-04-30 00:16:13.853100691 +0000 UTC m=+74.908715211" observedRunningTime="2025-04-30 00:16:15.59680446 +0000 UTC m=+76.652418980" watchObservedRunningTime="2025-04-30 00:16:15.822376442 +0000 UTC m=+76.877990962" Apr 30 00:16:18.063762 kubelet[2910]: E0430 00:16:18.063703 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:18.104169 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794). 
Apr 30 00:16:18.150262 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:16:18.152549 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:18.156986 systemd-logind[1582]: New session 21 of user core. Apr 30 00:16:18.171168 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:16:18.337084 sshd[5623]: Connection closed by 10.0.0.1 port 53794 Apr 30 00:16:18.339199 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:18.345210 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:53794.service: Deactivated successfully. Apr 30 00:16:18.348625 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:16:18.348653 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:16:18.350371 systemd-logind[1582]: Removed session 21. Apr 30 00:16:19.976210 containerd[1596]: time="2025-04-30T00:16:19.974988344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:20.050017 containerd[1596]: time="2025-04-30T00:16:20.049597559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 00:16:20.091419 containerd[1596]: time="2025-04-30T00:16:20.091332556Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:20.132251 containerd[1596]: time="2025-04-30T00:16:20.132121111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:20.134480 containerd[1596]: time="2025-04-30T00:16:20.133647999Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 6.280051649s" Apr 30 00:16:20.134480 containerd[1596]: time="2025-04-30T00:16:20.133746898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 00:16:20.137920 containerd[1596]: time="2025-04-30T00:16:20.137565537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:16:20.139414 containerd[1596]: time="2025-04-30T00:16:20.139368887Z" level=info msg="CreateContainer within sandbox \"62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:16:20.939873 containerd[1596]: time="2025-04-30T00:16:20.939702325Z" level=info msg="CreateContainer within sandbox \"62bd2ed0bef2c42473f19420eecb01d5f6004e9c5df46f8bde447a04c05f97d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"17d75370adaa98d30ed151057b3ab7263a98bd23cb31b67d7f08d7c9d4af746b\"" Apr 30 00:16:20.941257 containerd[1596]: time="2025-04-30T00:16:20.941206840Z" level=info msg="StartContainer for \"17d75370adaa98d30ed151057b3ab7263a98bd23cb31b67d7f08d7c9d4af746b\"" Apr 30 00:16:21.421461 containerd[1596]: time="2025-04-30T00:16:21.421366537Z" level=info msg="StartContainer for \"17d75370adaa98d30ed151057b3ab7263a98bd23cb31b67d7f08d7c9d4af746b\" returns successfully" Apr 30 00:16:21.580914 kubelet[2910]: I0430 00:16:21.580227 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-676f79f8bf-5c5xv" podStartSLOduration=41.317092538 
podStartE2EDuration="53.58019575s" podCreationTimestamp="2025-04-30 00:15:28 +0000 UTC" firstStartedPulling="2025-04-30 00:16:07.872320493 +0000 UTC m=+68.927935023" lastFinishedPulling="2025-04-30 00:16:20.135423715 +0000 UTC m=+81.191038235" observedRunningTime="2025-04-30 00:16:21.576471669 +0000 UTC m=+82.632086210" watchObservedRunningTime="2025-04-30 00:16:21.58019575 +0000 UTC m=+82.635810301" Apr 30 00:16:22.259495 kubelet[2910]: E0430 00:16:22.258974 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:23.064806 kubelet[2910]: E0430 00:16:23.064310 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:23.155034 containerd[1596]: time="2025-04-30T00:16:23.154940411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:23.157310 containerd[1596]: time="2025-04-30T00:16:23.156842028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 00:16:23.165466 containerd[1596]: time="2025-04-30T00:16:23.164655295Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:23.169807 containerd[1596]: time="2025-04-30T00:16:23.169657400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:23.170759 containerd[1596]: time="2025-04-30T00:16:23.170709579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id 
\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.033053618s" Apr 30 00:16:23.170838 containerd[1596]: time="2025-04-30T00:16:23.170765207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 00:16:23.178911 containerd[1596]: time="2025-04-30T00:16:23.178804780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:16:23.180715 containerd[1596]: time="2025-04-30T00:16:23.180641300Z" level=info msg="CreateContainer within sandbox \"ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:16:23.300657 containerd[1596]: time="2025-04-30T00:16:23.300525356Z" level=info msg="CreateContainer within sandbox \"ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2c4d0abebe3f5c538cc66427b59215555ab70777b722079baabe046ae2678c1a\"" Apr 30 00:16:23.302353 containerd[1596]: time="2025-04-30T00:16:23.302183823Z" level=info msg="StartContainer for \"2c4d0abebe3f5c538cc66427b59215555ab70777b722079baabe046ae2678c1a\"" Apr 30 00:16:23.358667 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). 
Apr 30 00:16:23.413448 containerd[1596]: time="2025-04-30T00:16:23.413327364Z" level=info msg="StartContainer for \"2c4d0abebe3f5c538cc66427b59215555ab70777b722079baabe046ae2678c1a\" returns successfully" Apr 30 00:16:23.430106 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w Apr 30 00:16:23.432303 sshd-session[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:23.439454 systemd-logind[1582]: New session 22 of user core. Apr 30 00:16:23.448479 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:16:23.590823 containerd[1596]: time="2025-04-30T00:16:23.590759760Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:23.594470 containerd[1596]: time="2025-04-30T00:16:23.594407104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:16:23.598102 containerd[1596]: time="2025-04-30T00:16:23.598035280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 419.168281ms" Apr 30 00:16:23.598102 containerd[1596]: time="2025-04-30T00:16:23.598087992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 00:16:23.599774 containerd[1596]: time="2025-04-30T00:16:23.599731400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:16:23.603687 containerd[1596]: time="2025-04-30T00:16:23.603625649Z" 
level=info msg="CreateContainer within sandbox \"49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:16:23.614426 sshd[5751]: Connection closed by 10.0.0.1 port 53802 Apr 30 00:16:23.615041 sshd-session[5731]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:23.623794 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:53802.service: Deactivated successfully. Apr 30 00:16:23.634341 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:16:23.635511 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:16:23.637591 systemd-logind[1582]: Removed session 22. Apr 30 00:16:23.641917 containerd[1596]: time="2025-04-30T00:16:23.641831503Z" level=info msg="CreateContainer within sandbox \"49be92dfba13027b2cd98d27906bf6f7582065be10d3fdff0a0b2adb17d3a240\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"409a32eb75f9a04552fd374b4586fb70937d727b06684fd71c93b4fce2a82076\"" Apr 30 00:16:23.642541 containerd[1596]: time="2025-04-30T00:16:23.642496495Z" level=info msg="StartContainer for \"409a32eb75f9a04552fd374b4586fb70937d727b06684fd71c93b4fce2a82076\"" Apr 30 00:16:23.735586 containerd[1596]: time="2025-04-30T00:16:23.735452013Z" level=info msg="StartContainer for \"409a32eb75f9a04552fd374b4586fb70937d727b06684fd71c93b4fce2a82076\" returns successfully" Apr 30 00:16:25.065988 kubelet[2910]: E0430 00:16:25.065732 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:16:25.529783 kubelet[2910]: I0430 00:16:25.529706 2910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:16:25.939806 containerd[1596]: time="2025-04-30T00:16:25.939635738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:25.976414 containerd[1596]: time="2025-04-30T00:16:25.976302134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 00:16:25.998124 containerd[1596]: time="2025-04-30T00:16:25.998035078Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:26.015510 containerd[1596]: time="2025-04-30T00:16:26.015405695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:26.016396 containerd[1596]: time="2025-04-30T00:16:26.016336053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.416551461s" Apr 30 00:16:26.016476 containerd[1596]: time="2025-04-30T00:16:26.016394135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 00:16:26.018938 containerd[1596]: time="2025-04-30T00:16:26.018870250Z" level=info msg="CreateContainer within sandbox \"ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:16:26.052861 containerd[1596]: time="2025-04-30T00:16:26.052778223Z" level=info msg="CreateContainer within sandbox 
\"ef97183d12a02d4b12a8136afcf8ce2a72a2020e3bbbd843aed3388cde013c40\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f4e53a3febac70ea2c25efbecbe5111dac357dbdd69034aa40f4ba28acb540e5\"" Apr 30 00:16:26.053857 containerd[1596]: time="2025-04-30T00:16:26.053797331Z" level=info msg="StartContainer for \"f4e53a3febac70ea2c25efbecbe5111dac357dbdd69034aa40f4ba28acb540e5\"" Apr 30 00:16:26.147389 containerd[1596]: time="2025-04-30T00:16:26.147220201Z" level=info msg="StartContainer for \"f4e53a3febac70ea2c25efbecbe5111dac357dbdd69034aa40f4ba28acb540e5\" returns successfully" Apr 30 00:16:26.293279 kubelet[2910]: I0430 00:16:26.293223 2910 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:16:26.293279 kubelet[2910]: I0430 00:16:26.293296 2910 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:16:26.549600 kubelet[2910]: I0430 00:16:26.549376 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-676f79f8bf-gn2h7" podStartSLOduration=44.775727692 podStartE2EDuration="58.549356376s" podCreationTimestamp="2025-04-30 00:15:28 +0000 UTC" firstStartedPulling="2025-04-30 00:16:09.825631778 +0000 UTC m=+70.881246298" lastFinishedPulling="2025-04-30 00:16:23.599260452 +0000 UTC m=+84.654874982" observedRunningTime="2025-04-30 00:16:24.539115257 +0000 UTC m=+85.594729777" watchObservedRunningTime="2025-04-30 00:16:26.549356376 +0000 UTC m=+87.604970896" Apr 30 00:16:28.626409 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:33426.service - OpenSSH per-connection server daemon (10.0.0.1:33426). 
Apr 30 00:16:28.678780 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 33426 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:28.681143 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:28.686649 systemd-logind[1582]: New session 23 of user core.
Apr 30 00:16:28.693227 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:16:28.952081 sshd[5873]: Connection closed by 10.0.0.1 port 33426
Apr 30 00:16:28.952346 sshd-session[5870]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:28.962558 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:33430.service - OpenSSH per-connection server daemon (10.0.0.1:33430).
Apr 30 00:16:28.963486 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:33426.service: Deactivated successfully.
Apr 30 00:16:28.970097 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:16:28.970878 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:16:28.972994 systemd-logind[1582]: Removed session 23.
Apr 30 00:16:29.011207 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 33430 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:29.018795 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:29.033298 systemd-logind[1582]: New session 24 of user core.
Apr 30 00:16:29.053172 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:16:29.645814 sshd[5890]: Connection closed by 10.0.0.1 port 33430
Apr 30 00:16:29.646058 sshd-session[5884]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:29.654383 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:33434.service - OpenSSH per-connection server daemon (10.0.0.1:33434).
Apr 30 00:16:29.654919 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:33430.service: Deactivated successfully.
Apr 30 00:16:29.659458 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:16:29.660453 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:16:29.662388 systemd-logind[1582]: Removed session 24.
Apr 30 00:16:29.699772 sshd[5898]: Accepted publickey for core from 10.0.0.1 port 33434 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:29.701529 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:29.706575 systemd-logind[1582]: New session 25 of user core.
Apr 30 00:16:29.714517 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:16:31.696828 sshd[5904]: Connection closed by 10.0.0.1 port 33434
Apr 30 00:16:31.700000 sshd-session[5898]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:31.715289 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:33438.service - OpenSSH per-connection server daemon (10.0.0.1:33438).
Apr 30 00:16:31.716175 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:33434.service: Deactivated successfully.
Apr 30 00:16:31.728510 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:16:31.740692 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:16:31.751991 systemd-logind[1582]: Removed session 25.
Apr 30 00:16:31.800033 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 33438 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:31.801741 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:31.806784 systemd-logind[1582]: New session 26 of user core.
Apr 30 00:16:31.816578 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:16:32.382149 sshd[5929]: Connection closed by 10.0.0.1 port 33438
Apr 30 00:16:32.382764 sshd-session[5922]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:32.393276 systemd[1]: Started sshd@26-10.0.0.39:22-10.0.0.1:33450.service - OpenSSH per-connection server daemon (10.0.0.1:33450).
Apr 30 00:16:32.393857 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:33438.service: Deactivated successfully.
Apr 30 00:16:32.396239 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:16:32.398454 systemd-logind[1582]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:16:32.399920 systemd-logind[1582]: Removed session 26.
Apr 30 00:16:32.434782 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 33450 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:32.436554 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:32.445083 systemd-logind[1582]: New session 27 of user core.
Apr 30 00:16:32.450236 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:16:32.576727 sshd[5942]: Connection closed by 10.0.0.1 port 33450
Apr 30 00:16:32.577178 sshd-session[5936]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:32.581919 systemd[1]: sshd@26-10.0.0.39:22-10.0.0.1:33450.service: Deactivated successfully.
Apr 30 00:16:32.585477 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:16:32.586303 systemd-logind[1582]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:16:32.587568 systemd-logind[1582]: Removed session 27.
Apr 30 00:16:33.063963 kubelet[2910]: E0430 00:16:33.063579 2910 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:16:34.881095 kubelet[2910]: I0430 00:16:34.880452 2910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 00:16:34.938876 kubelet[2910]: I0430 00:16:34.936037 2910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4t95w" podStartSLOduration=49.429332366 podStartE2EDuration="1m6.936016176s" podCreationTimestamp="2025-04-30 00:15:28 +0000 UTC" firstStartedPulling="2025-04-30 00:16:08.510459844 +0000 UTC m=+69.566074364" lastFinishedPulling="2025-04-30 00:16:26.017143644 +0000 UTC m=+87.072758174" observedRunningTime="2025-04-30 00:16:26.549694269 +0000 UTC m=+87.605308789" watchObservedRunningTime="2025-04-30 00:16:34.936016176 +0000 UTC m=+95.991630696"
Apr 30 00:16:37.595309 systemd[1]: Started sshd@27-10.0.0.39:22-10.0.0.1:37478.service - OpenSSH per-connection server daemon (10.0.0.1:37478).
Apr 30 00:16:37.640666 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 37478 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:37.642589 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:37.647927 systemd-logind[1582]: New session 28 of user core.
Apr 30 00:16:37.662368 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 00:16:37.780258 sshd[5959]: Connection closed by 10.0.0.1 port 37478
Apr 30 00:16:37.780703 sshd-session[5956]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:37.785249 systemd[1]: sshd@27-10.0.0.39:22-10.0.0.1:37478.service: Deactivated successfully.
Apr 30 00:16:37.788280 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 00:16:37.789315 systemd-logind[1582]: Session 28 logged out. Waiting for processes to exit.
Apr 30 00:16:37.790249 systemd-logind[1582]: Removed session 28.
Apr 30 00:16:41.549629 systemd[1]: run-containerd-runc-k8s.io-35266a092232aa388a6be893c4b2a880ee402cfffcd06de765d5f53a36da53be-runc.AGaept.mount: Deactivated successfully.
Apr 30 00:16:42.793298 systemd[1]: Started sshd@28-10.0.0.39:22-10.0.0.1:37492.service - OpenSSH per-connection server daemon (10.0.0.1:37492).
Apr 30 00:16:42.833634 sshd[5997]: Accepted publickey for core from 10.0.0.1 port 37492 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:42.835702 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:42.840817 systemd-logind[1582]: New session 29 of user core.
Apr 30 00:16:42.850230 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 30 00:16:42.976130 sshd[6000]: Connection closed by 10.0.0.1 port 37492
Apr 30 00:16:42.977913 sshd-session[5997]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:42.982813 systemd[1]: sshd@28-10.0.0.39:22-10.0.0.1:37492.service: Deactivated successfully.
Apr 30 00:16:42.986702 systemd[1]: session-29.scope: Deactivated successfully.
Apr 30 00:16:42.987740 systemd-logind[1582]: Session 29 logged out. Waiting for processes to exit.
Apr 30 00:16:42.988874 systemd-logind[1582]: Removed session 29.
Apr 30 00:16:47.994495 systemd[1]: Started sshd@29-10.0.0.39:22-10.0.0.1:51462.service - OpenSSH per-connection server daemon (10.0.0.1:51462).
Apr 30 00:16:48.049488 sshd[6023]: Accepted publickey for core from 10.0.0.1 port 51462 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:48.051566 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:48.061872 systemd-logind[1582]: New session 30 of user core.
Apr 30 00:16:48.074094 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 30 00:16:48.204420 sshd[6026]: Connection closed by 10.0.0.1 port 51462
Apr 30 00:16:48.204919 sshd-session[6023]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:48.209412 systemd[1]: sshd@29-10.0.0.39:22-10.0.0.1:51462.service: Deactivated successfully.
Apr 30 00:16:48.211828 systemd-logind[1582]: Session 30 logged out. Waiting for processes to exit.
Apr 30 00:16:48.211917 systemd[1]: session-30.scope: Deactivated successfully.
Apr 30 00:16:48.213232 systemd-logind[1582]: Removed session 30.
Apr 30 00:16:53.216298 systemd[1]: Started sshd@30-10.0.0.39:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466).
Apr 30 00:16:53.259102 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:t5CZeHTK9TgBa9wQniEYTA8wyun/e3KKqj2lL09IO8w
Apr 30 00:16:53.261282 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:16:53.266749 systemd-logind[1582]: New session 31 of user core.
Apr 30 00:16:53.280524 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 30 00:16:53.548875 sshd[6067]: Connection closed by 10.0.0.1 port 51466
Apr 30 00:16:53.547036 sshd-session[6064]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:53.552651 systemd-logind[1582]: Session 31 logged out. Waiting for processes to exit.
Apr 30 00:16:53.555498 systemd[1]: sshd@30-10.0.0.39:22-10.0.0.1:51466.service: Deactivated successfully.
Apr 30 00:16:53.562021 systemd[1]: session-31.scope: Deactivated successfully.
Apr 30 00:16:53.567374 systemd-logind[1582]: Removed session 31.
Apr 30 00:16:53.971612 kernel: hrtimer: interrupt took 8359933 ns