Mar 17 17:37:56.892165 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:37:56.892185 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:37:56.892195 kernel: KASLR enabled
Mar 17 17:37:56.892200 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:37:56.892206 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Mar 17 17:37:56.892211 kernel: random: crng init done
Mar 17 17:37:56.892218 kernel: secureboot: Secure boot disabled
Mar 17 17:37:56.892224 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:37:56.892230 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 17 17:37:56.892237 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:37:56.892243 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892249 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892255 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892261 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892268 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892275 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892281 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892287 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892294 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:37:56.892300 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 17:37:56.892306 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:37:56.892312 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:37:56.892318 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 17 17:37:56.892324 kernel: Zone ranges:
Mar 17 17:37:56.892330 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:37:56.892337 kernel: DMA32 empty
Mar 17 17:37:56.892343 kernel: Normal empty
Mar 17 17:37:56.892349 kernel: Movable zone start for each node
Mar 17 17:37:56.892355 kernel: Early memory node ranges
Mar 17 17:37:56.892361 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Mar 17 17:37:56.892368 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 17 17:37:56.892374 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 17 17:37:56.892380 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 17 17:37:56.892386 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 17 17:37:56.892392 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 17 17:37:56.892398 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 17 17:37:56.892404 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:37:56.892411 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 17:37:56.892417 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:37:56.892424 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:37:56.892432 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:37:56.892439 kernel: psci: Trusted OS migration not required
Mar 17 17:37:56.892445 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:37:56.892453 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:37:56.892460 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:37:56.892466 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:37:56.892473 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 17:37:56.892479 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:37:56.892486 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:37:56.892493 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:37:56.892499 kernel: CPU features: detected: Spectre-v4
Mar 17 17:37:56.892505 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:37:56.892512 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:37:56.892520 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:37:56.892526 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:37:56.892533 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:37:56.892539 kernel: alternatives: applying boot alternatives
Mar 17 17:37:56.892547 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:37:56.892554 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:37:56.892560 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:37:56.892567 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:37:56.892574 kernel: Fallback order for Node 0: 0
Mar 17 17:37:56.892580 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 17:37:56.892587 kernel: Policy zone: DMA
Mar 17 17:37:56.892594 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:37:56.892601 kernel: software IO TLB: area num 4.
Mar 17 17:37:56.892607 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 17 17:37:56.892614 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
Mar 17 17:37:56.892634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:37:56.892641 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:37:56.892648 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:37:56.892655 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:37:56.892661 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:37:56.892668 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:37:56.892674 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:37:56.892681 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:37:56.892690 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:37:56.892697 kernel: GICv3: 256 SPIs implemented
Mar 17 17:37:56.892703 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:37:56.892710 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:37:56.892722 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:37:56.892729 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:37:56.892736 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:37:56.892742 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:37:56.892749 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:37:56.892756 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 17 17:37:56.892763 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 17 17:37:56.892771 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:37:56.892777 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:37:56.892784 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:37:56.892791 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:37:56.892798 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:37:56.892804 kernel: arm-pv: using stolen time PV
Mar 17 17:37:56.892811 kernel: Console: colour dummy device 80x25
Mar 17 17:37:56.892818 kernel: ACPI: Core revision 20230628
Mar 17 17:37:56.892825 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:37:56.892831 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:37:56.892840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:37:56.892846 kernel: landlock: Up and running.
Mar 17 17:37:56.892853 kernel: SELinux: Initializing.
Mar 17 17:37:56.892860 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:37:56.892867 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:37:56.892873 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:37:56.892880 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:37:56.892887 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:37:56.892893 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:37:56.892901 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:37:56.892908 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:37:56.892915 kernel: Remapping and enabling EFI services.
Mar 17 17:37:56.892921 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:37:56.892928 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:37:56.892935 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:37:56.892941 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 17 17:37:56.892948 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:37:56.892955 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:37:56.892962 kernel: Detected PIPT I-cache on CPU2
Mar 17 17:37:56.892970 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 17:37:56.892977 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 17 17:37:56.892989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:37:56.892997 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 17:37:56.893004 kernel: Detected PIPT I-cache on CPU3
Mar 17 17:37:56.893012 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 17:37:56.893019 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 17 17:37:56.893026 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:37:56.893034 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 17:37:56.893042 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:37:56.893050 kernel: SMP: Total of 4 processors activated.
Mar 17 17:37:56.893057 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:37:56.893064 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:37:56.893072 kernel: CPU features: detected: Common not Private translations
Mar 17 17:37:56.893079 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:37:56.893086 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:37:56.893093 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:37:56.893102 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:37:56.893109 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:37:56.893117 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:37:56.893124 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:37:56.893131 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:37:56.893138 kernel: alternatives: applying system-wide alternatives
Mar 17 17:37:56.893145 kernel: devtmpfs: initialized
Mar 17 17:37:56.893152 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:37:56.893159 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:37:56.893168 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:37:56.893175 kernel: SMBIOS 3.0.0 present.
Mar 17 17:37:56.893182 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 17 17:37:56.893189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:37:56.893196 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:37:56.893203 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:37:56.893210 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:37:56.893218 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:37:56.893225 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 17 17:37:56.893233 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:37:56.893240 kernel: cpuidle: using governor menu
Mar 17 17:37:56.893247 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:37:56.893255 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:37:56.893261 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:37:56.893269 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:37:56.893276 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:37:56.893283 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:37:56.893290 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:37:56.893298 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:37:56.893305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:37:56.893312 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:37:56.893319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:37:56.893326 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:37:56.893333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:37:56.893340 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:37:56.893347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:37:56.893354 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:37:56.893362 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:37:56.893369 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:37:56.893376 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:37:56.893384 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:37:56.893390 kernel: ACPI: Interpreter enabled
Mar 17 17:37:56.893397 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:37:56.893404 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:37:56.893412 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:37:56.893419 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:37:56.893428 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:37:56.893558 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:37:56.893679 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:37:56.893781 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:37:56.893849 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:37:56.893912 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:37:56.893921 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:37:56.893933 kernel: PCI host bridge to bus 0000:00
Mar 17 17:37:56.894002 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:37:56.894063 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:37:56.894120 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:37:56.894176 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:37:56.894253 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:37:56.894327 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:37:56.894397 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 17:37:56.894461 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 17:37:56.894525 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:37:56.894593 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:37:56.894741 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 17:37:56.894810 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 17:37:56.894868 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:37:56.894930 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:37:56.894989 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:37:56.894998 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:37:56.895006 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:37:56.895013 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:37:56.895020 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:37:56.895027 kernel: iommu: Default domain type: Translated
Mar 17 17:37:56.895034 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:37:56.895043 kernel: efivars: Registered efivars operations
Mar 17 17:37:56.895051 kernel: vgaarb: loaded
Mar 17 17:37:56.895058 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:37:56.895065 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:37:56.895072 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:37:56.895079 kernel: pnp: PnP ACPI init
Mar 17 17:37:56.895167 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:37:56.895177 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:37:56.895186 kernel: NET: Registered PF_INET protocol family
Mar 17 17:37:56.895193 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:37:56.895201 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:37:56.895208 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:37:56.895215 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:37:56.895222 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:37:56.895230 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:37:56.895237 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:37:56.895245 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:37:56.895254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:37:56.895261 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:37:56.895268 kernel: kvm [1]: HYP mode not available
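The PCI enumeration above (host bridge 1b36:0008, a virtio function 1af4:1005 with three BARs) can be cross-checked from userspace once the guest is up, since the kernel exposes every function's IDs through standard sysfs attributes. A minimal sketch; nothing here is specific to this VM:

```python
from pathlib import Path

# List PCI functions with their vendor/device IDs, mirroring lines like
# "pci 0000:00:01.0: [1af4:1005]" in the boot log above.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()   # e.g. "0x1af4" (virtio)
    device = (dev / "device").read_text().strip()   # e.g. "0x1005"
    pclass = (dev / "class").read_text().strip()    # e.g. "0x00ff00"
    print(f"{dev.name}: [{vendor[2:]}:{device[2:]}] class {pclass}")
```

On this machine it would report 0000:00:00.0 as [1b36:0008] and 0000:00:01.0 as [1af4:1005], matching the log.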
Mar 17 17:37:56.895275 kernel: Initialise system trusted keyrings
Mar 17 17:37:56.895282 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:37:56.895289 kernel: Key type asymmetric registered
Mar 17 17:37:56.895295 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:37:56.895303 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:37:56.895310 kernel: io scheduler mq-deadline registered
Mar 17 17:37:56.895319 kernel: io scheduler kyber registered
Mar 17 17:37:56.895326 kernel: io scheduler bfq registered
Mar 17 17:37:56.895333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:37:56.895340 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:37:56.895347 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:37:56.895412 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 17:37:56.895422 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:37:56.895429 kernel: thunder_xcv, ver 1.0
Mar 17 17:37:56.895436 kernel: thunder_bgx, ver 1.0
Mar 17 17:37:56.895444 kernel: nicpf, ver 1.0
Mar 17 17:37:56.895451 kernel: nicvf, ver 1.0
Mar 17 17:37:56.895523 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:37:56.895587 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:37:56 UTC (1742233076)
Mar 17 17:37:56.895596 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:37:56.895604 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 17:37:56.895611 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:37:56.895627 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:37:56.895638 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:37:56.895645 kernel: Segment Routing with IPv6
Mar 17 17:37:56.895652 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:37:56.895659 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:37:56.895666 kernel: Key type dns_resolver registered
Mar 17 17:37:56.895673 kernel: registered taskstats version 1
Mar 17 17:37:56.895680 kernel: Loading compiled-in X.509 certificates
Mar 17 17:37:56.895687 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:37:56.895695 kernel: Key type .fscrypt registered
Mar 17 17:37:56.895703 kernel: Key type fscrypt-provisioning registered
Mar 17 17:37:56.895710 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:37:56.895723 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:37:56.895731 kernel: ima: No architecture policies found
Mar 17 17:37:56.895738 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:37:56.895745 kernel: clk: Disabling unused clocks
Mar 17 17:37:56.895753 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:37:56.895760 kernel: Run /init as init process
Mar 17 17:37:56.895767 kernel: with arguments:
Mar 17 17:37:56.895776 kernel: /init
Mar 17 17:37:56.895784 kernel: with environment:
Mar 17 17:37:56.895791 kernel: HOME=/
Mar 17 17:37:56.895798 kernel: TERM=linux
Mar 17 17:37:56.895805 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:37:56.895815 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:37:56.895824 systemd[1]: Detected virtualization kvm.
Mar 17 17:37:56.895833 systemd[1]: Detected architecture arm64.
Mar 17 17:37:56.895842 systemd[1]: Running in initrd.
Mar 17 17:37:56.895849 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:37:56.895867 systemd[1]: Hostname set to <localhost>.
Mar 17 17:37:56.895875 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:37:56.895883 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:37:56.895892 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:37:56.895900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:37:56.895908 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:37:56.895917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:37:56.895925 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:37:56.895933 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:37:56.895942 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:37:56.895950 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:37:56.895957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:37:56.895965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:37:56.895974 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:37:56.895981 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:37:56.895989 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:37:56.895997 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:37:56.896004 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:37:56.896012 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:37:56.896020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:37:56.896028 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:37:56.896037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
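The \x2d sequences in the device unit names above come from systemd's unit-name escaping: '/' in a path is encoded as '-', so a literal '-' (as in "by-label" or "EFI-SYSTEM") must itself be byte-escaped. A rough approximation of what `systemd-escape --path` does (the real implementation has additional rules, e.g. for leading dots and empty paths):

```python
def systemd_escape_path(path: str) -> str:
    # Approximation of `systemd-escape --path`: strip slashes at the ends,
    # map '/' to '-', and \xXX-escape bytes outside [A-Za-z0-9_.]
    # (including '-', which is why "by-label" becomes "by\x2dlabel").
    trimmed = path.strip("/")
    out = []
    for i, byte in enumerate(trimmed.encode()):
        ch = chr(byte)
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append(f"\\x{byte:02x}")
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit in the log
```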
Mar 17 17:37:56.896044 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:37:56.896052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:37:56.896060 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:37:56.896067 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:37:56.896075 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:37:56.896082 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:37:56.896090 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:37:56.896098 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:37:56.896107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:37:56.896115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:56.896122 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:37:56.896130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:37:56.896137 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:37:56.896146 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:37:56.896173 systemd-journald[239]: Collecting audit messages is disabled.
Mar 17 17:37:56.896192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:56.896202 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:37:56.896211 systemd-journald[239]: Journal started
Mar 17 17:37:56.896230 systemd-journald[239]: Runtime Journal (/run/log/journal/982dcfdc2e934050b3663028e5556a36) is 5.9M, max 47.3M, 41.4M free.
Mar 17 17:37:56.886968 systemd-modules-load[240]: Inserted module 'overlay'
Mar 17 17:37:56.899637 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:37:56.901668 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:37:56.904680 kernel: Bridge firewalling registered
Mar 17 17:37:56.903652 systemd-modules-load[240]: Inserted module 'br_netfilter'
Mar 17 17:37:56.906785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:37:56.909396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:37:56.912115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:37:56.914333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:37:56.919828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:37:56.920750 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:37:56.922756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:37:56.927738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:56.935838 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:37:56.936770 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:37:56.941189 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:37:56.950380 dracut-cmdline[277]: dracut-dracut-053
Mar 17 17:37:56.953038 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:37:56.968667 systemd-resolved[283]: Positive Trust Anchors:
Mar 17 17:37:56.968750 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:37:56.968782 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:37:56.973549 systemd-resolved[283]: Defaulting to hostname 'linux'.
Mar 17 17:37:56.974747 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:37:56.975730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:37:57.022654 kernel: SCSI subsystem initialized
Mar 17 17:37:57.027637 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:37:57.034640 kernel: iscsi: registered transport (tcp)
Mar 17 17:37:57.047682 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:37:57.047735 kernel: QLogic iSCSI HBA Driver
Mar 17 17:37:57.091987 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:37:57.107817 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:37:57.126882 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:37:57.126928 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:37:57.128167 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:37:57.173648 kernel: raid6: neonx8 gen() 15771 MB/s
Mar 17 17:37:57.190665 kernel: raid6: neonx4 gen() 15648 MB/s
Mar 17 17:37:57.207635 kernel: raid6: neonx2 gen() 13230 MB/s
Mar 17 17:37:57.224637 kernel: raid6: neonx1 gen() 10472 MB/s
Mar 17 17:37:57.241647 kernel: raid6: int64x8 gen() 6968 MB/s
Mar 17 17:37:57.258636 kernel: raid6: int64x4 gen() 7341 MB/s
Mar 17 17:37:57.275635 kernel: raid6: int64x2 gen() 6125 MB/s
Mar 17 17:37:57.292638 kernel: raid6: int64x1 gen() 5058 MB/s
Mar 17 17:37:57.292652 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Mar 17 17:37:57.309639 kernel: raid6: .... xor() 11934 MB/s, rmw enabled
Mar 17 17:37:57.309651 kernel: raid6: using neon recovery algorithm
Mar 17 17:37:57.314636 kernel: xor: measuring software checksum speed
Mar 17 17:37:57.314651 kernel: 8regs : 19793 MB/sec
Mar 17 17:37:57.314661 kernel: 32regs : 18765 MB/sec
Mar 17 17:37:57.315968 kernel: arm64_neon : 27061 MB/sec
Mar 17 17:37:57.315980 kernel: xor: using function: arm64_neon (27061 MB/sec)
Mar 17 17:37:57.366642 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:37:57.377356 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:37:57.394853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:37:57.407905 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Mar 17 17:37:57.411001 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:37:57.416779 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:37:57.427970 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Mar 17 17:37:57.456660 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:37:57.467858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:37:57.506744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:37:57.516761 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:37:57.526249 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:37:57.528032 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:37:57.529161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:37:57.530667 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:37:57.543788 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:37:57.548318 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 17 17:37:57.557978 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:37:57.558083 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:37:57.558094 kernel: GPT:9289727 != 19775487
Mar 17 17:37:57.558103 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:37:57.558113 kernel: GPT:9289727 != 19775487
Mar 17 17:37:57.558121 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:37:57.558138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:37:57.556959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:37:57.557019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:57.558122 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:37:57.559005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:37:57.559055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:57.560693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:57.570861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:57.573674 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:37:57.577322 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (505)
Mar 17 17:37:57.577347 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (521)
Mar 17 17:37:57.584595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:57.589146 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:37:57.593472 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
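The GPT complaints above ("9289727 != 19775487") are the typical signature of a disk image that was grown after creation: the primary header still records a backup header at the image's original last sector instead of at the end of the now 19775488-block device. Flatcar rewrites the headers on first boot (see the disk-uuid "Secondary Header is updated" messages below). For illustration, a minimal sketch that performs the same check the kernel does, reading the primary GPT header from a raw image; the image path is hypothetical:

```python
import struct

SECTOR = 512  # logical block size reported for vda in the log

def check_backup_header(image_path: str) -> None:
    """Compare the backup-header LBA stored in the primary GPT header
    (bytes 24..39 at LBA 1 hold current and backup LBA) with the
    device's actual last LBA."""
    with open(image_path, "rb") as img:
        img.seek(0, 2)
        actual_last_lba = img.tell() // SECTOR - 1
        img.seek(1 * SECTOR)               # primary GPT header lives at LBA 1
        header = img.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    current_lba, backup_lba = struct.unpack_from("<QQ", header, 24)
    status = "OK" if backup_lba == actual_last_lba else "stale (disk was grown?)"
    print(f"primary header at LBA {current_lba}")
    print(f"backup header recorded at LBA {backup_lba}, "
          f"device ends at LBA {actual_last_lba}: {status}")

check_backup_header("flatcar.img")  # hypothetical image name
```

With this disk's numbers it would report a recorded backup header at LBA 9289727 against an actual last LBA of 19775487, exactly the kernel's complaint; tools such as sgdisk -e relocate the backup structures to the true end of the disk.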
Mar 17 17:37:57.599739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:37:57.600792 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:37:57.605874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:37:57.621785 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:37:57.623892 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:37:57.627451 disk-uuid[551]: Primary Header is updated.
Mar 17 17:37:57.627451 disk-uuid[551]: Secondary Entries is updated.
Mar 17 17:37:57.627451 disk-uuid[551]: Secondary Header is updated.
Mar 17 17:37:57.630653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:37:57.644264 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:58.641655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:37:58.643902 disk-uuid[552]: The operation has completed successfully.
Mar 17 17:37:58.661526 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:37:58.661636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:37:58.683964 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:37:58.690897 sh[573]: Success
Mar 17 17:37:58.708143 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:37:58.736752 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:37:58.748989 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:37:58.751002 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:37:58.762118 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:37:58.762170 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:58.762181 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:37:58.762191 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:37:58.762201 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:37:58.766127 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:37:58.767353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:37:58.774809 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:37:58.776252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:37:58.784111 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:37:58.784156 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:58.784175 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:37:58.786647 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:37:58.794159 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:37:58.795106 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:37:58.799942 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:37:58.807860 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:37:58.878676 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:37:58.894804 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:37:58.913520 ignition[662]: Ignition 2.20.0
Mar 17 17:37:58.913530 ignition[662]: Stage: fetch-offline
Mar 17 17:37:58.913564 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:58.913575 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:37:58.913805 ignition[662]: parsed url from cmdline: ""
Mar 17 17:37:58.913809 ignition[662]: no config URL provided
Mar 17 17:37:58.913813 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:37:58.913821 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:37:58.913851 ignition[662]: op(1): [started] loading QEMU firmware config module
Mar 17 17:37:58.913856 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:37:58.921176 ignition[662]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:37:58.921571 systemd-networkd[765]: lo: Link UP
Mar 17 17:37:58.921575 systemd-networkd[765]: lo: Gained carrier
Mar 17 17:37:58.922295 systemd-networkd[765]: Enumeration completed
Mar 17 17:37:58.922391 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:37:58.922744 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:37:58.922748 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:37:58.923555 systemd-networkd[765]: eth0: Link UP
Mar 17 17:37:58.923558 systemd-networkd[765]: eth0: Gained carrier
Mar 17 17:37:58.923564 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:37:58.924872 systemd[1]: Reached target network.target - Network.
Mar 17 17:37:58.946678 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:37:58.970015 ignition[662]: parsing config with SHA512: c9a9f862c15846e42a07717bc7d64a8f07c580b3701fef6beb2da7f845e5f237367bd4e66e1e1facb545448c7a504e0275c7df28afbd09067b6b031960ec81ad
Mar 17 17:37:58.976543 unknown[662]: fetched base config from "system"
Mar 17 17:37:58.976555 unknown[662]: fetched user config from "qemu"
Mar 17 17:37:58.977022 ignition[662]: fetch-offline: fetch-offline passed
Mar 17 17:37:58.978706 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:37:58.977102 ignition[662]: Ignition finished successfully
Mar 17 17:37:58.979784 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:37:58.990866 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:37:59.002004 ignition[772]: Ignition 2.20.0
Mar 17 17:37:59.002013 ignition[772]: Stage: kargs
Mar 17 17:37:59.002180 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:59.002189 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:37:59.003192 ignition[772]: kargs: kargs passed
Mar 17 17:37:59.006001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
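On the QEMU platform Ignition has no metadata service to contact: op(1) above loads the qemu_fw_cfg module, the user config is read from a firmware-config blob supplied by the host, and the "parsing config with SHA512" line logs the digest of that blob. A sketch of reproducing the digest from inside a booted guest, assuming the fw_cfg key conventionally used for Ignition (opt/com.coreos/config, set on the host via -fw_cfg name=opt/com.coreos/config,file=config.ign):

```python
import hashlib
from pathlib import Path

# Entry exposed under sysfs by the qemu_fw_cfg kernel module; the key name
# is the conventional Ignition key on QEMU (assumption, not shown in the log).
raw = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

config = raw.read_bytes()
# Should match the "parsing config with SHA512: ..." digest logged above.
print(hashlib.sha512(config).hexdigest())
```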
Mar 17 17:37:59.003240 ignition[772]: Ignition finished successfully
Mar 17 17:37:59.017854 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:37:59.028401 ignition[780]: Ignition 2.20.0
Mar 17 17:37:59.028411 ignition[780]: Stage: disks
Mar 17 17:37:59.028583 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:59.028592 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:37:59.029522 ignition[780]: disks: disks passed
Mar 17 17:37:59.029567 ignition[780]: Ignition finished successfully
Mar 17 17:37:59.031690 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:37:59.032646 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:37:59.033677 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:37:59.035137 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:37:59.035892 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:37:59.037460 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:37:59.047828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:37:59.058356 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:37:59.061474 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:37:59.072749 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:37:59.113641 kernel: EXT4-fs (vda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:37:59.114373 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:37:59.115462 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:37:59.126719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:37:59.128521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:37:59.129357 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:37:59.129399 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:37:59.129421 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:37:59.137315 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:37:59.140176 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:37:59.142746 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Mar 17 17:37:59.142784 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:37:59.144248 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:59.144281 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:37:59.146630 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:37:59.147687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:37:59.184590 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:37:59.188658 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:37:59.192696 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:37:59.197050 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:37:59.275005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:37:59.286735 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:37:59.288207 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:37:59.293649 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:37:59.311415 ignition[914]: INFO : Ignition 2.20.0
Mar 17 17:37:59.311415 ignition[914]: INFO : Stage: mount
Mar 17 17:37:59.312744 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:59.312744 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:37:59.312744 ignition[914]: INFO : mount: mount passed
Mar 17 17:37:59.312744 ignition[914]: INFO : Ignition finished successfully
Mar 17 17:37:59.312215 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:37:59.315139 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:37:59.327729 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:37:59.760158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:37:59.773806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:37:59.779636 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Mar 17 17:37:59.782130 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:37:59.782145 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:59.782155 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:37:59.784643 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:37:59.785199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:37:59.807750 ignition[944]: INFO : Ignition 2.20.0
Mar 17 17:37:59.807750 ignition[944]: INFO : Stage: files
Mar 17 17:37:59.808993 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:59.808993 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:37:59.808993 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:37:59.811677 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:37:59.811677 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:37:59.813860 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:37:59.813860 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:37:59.813860 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:37:59.813860 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:37:59.812613 unknown[944]: wrote ssh authorized keys file for user: core
Mar 17 17:37:59.818997 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:37:59.882196 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:38:00.006481 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:38:00.008285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:38:00.021795 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:38:00.021795 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:38:00.021795 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 17 17:38:00.295831 systemd-networkd[765]: eth0: Gained IPv6LL
Mar 17 17:38:00.358810 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:38:00.655091 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:38:00.655091 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:38:00.657751 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:38:00.679756 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:38:00.683479 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:38:00.684738 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:38:00.684738 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:38:00.684738 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:38:00.684738 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:38:00.684738 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:38:00.684738 ignition[944]: INFO : files: files passed
Mar 17 17:38:00.684738 ignition[944]: INFO : Ignition finished successfully
Mar 17 17:38:00.686087 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:38:00.696749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:38:00.699173 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:38:00.700274 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:38:00.700351 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:38:00.705892 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:38:00.709059 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:38:00.709059 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:38:00.711490 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:38:00.711046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:38:00.712775 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:38:00.723757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:38:00.742252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:38:00.742352 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:38:00.744277 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:38:00.745068 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:38:00.746911 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:38:00.759776 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:38:00.770516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:38:00.783857 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:38:00.792302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:38:00.793241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:38:00.795014 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:38:00.796561 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:38:00.796698 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:38:00.798982 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:38:00.800727 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:38:00.802192 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:38:00.803585 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:38:00.805283 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:38:00.806963 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:38:00.808479 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:38:00.810103 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:38:00.811745 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:38:00.813340 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:38:00.814714 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:38:00.814826 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:38:00.817037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:38:00.818776 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:38:00.820546 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:38:00.823674 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:38:00.824674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:38:00.824781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:38:00.827401 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:38:00.827506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:38:00.829340 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:38:00.830847 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:38:00.834661 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:38:00.835586 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:38:00.837141 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:38:00.838412 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:38:00.838498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:38:00.839587 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:38:00.839684 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:38:00.840789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:38:00.840901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:38:00.842152 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:38:00.842247 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:38:00.851778 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:38:00.852436 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:38:00.852553 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:38:00.855318 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:38:00.856567 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:38:00.856729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:38:00.858312 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:38:00.858479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:38:00.863396 ignition[998]: INFO : Ignition 2.20.0 Mar 17 17:38:00.863396 ignition[998]: INFO : Stage: umount Mar 17 17:38:00.863396 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:00.863396 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:00.864526 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:38:00.869460 ignition[998]: INFO : umount: umount passed Mar 17 17:38:00.869460 ignition[998]: INFO : Ignition finished successfully Mar 17 17:38:00.864608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:38:00.865859 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:38:00.866000 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:38:00.867804 systemd[1]: Stopped target network.target - Network. 
Mar 17 17:38:00.868834 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:38:00.868896 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:38:00.870343 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:38:00.870388 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:38:00.875874 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:38:00.875929 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:38:00.876643 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:38:00.876681 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:38:00.878071 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:38:00.879455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:38:00.881431 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:38:00.891294 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:38:00.891397 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:38:00.891663 systemd-networkd[765]: eth0: DHCPv6 lease lost Mar 17 17:38:00.893359 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:38:00.893484 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:38:00.895543 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:38:00.895584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:38:00.902691 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:38:00.903519 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:38:00.903571 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:38:00.905321 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:38:00.905361 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:38:00.906936 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:38:00.906980 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:38:00.908748 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:38:00.908789 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:38:00.910493 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:38:00.923872 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:38:00.923991 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:38:00.925870 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:38:00.925992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:38:00.927653 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:38:00.927735 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:38:00.929774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:38:00.929827 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:38:00.931201 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:38:00.931235 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:38:00.932643 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:38:00.932696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:38:00.934889 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:38:00.934931 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:38:00.937229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:38:00.937272 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:38:00.939575 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:38:00.939615 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:38:00.954765 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:38:00.955829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:38:00.955887 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:38:00.957858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:38:00.957903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:38:00.959952 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:38:00.961650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:38:00.963515 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:38:00.965482 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:38:00.974219 systemd[1]: Switching root. Mar 17 17:38:01.004354 systemd-journald[239]: Journal stopped Mar 17 17:38:01.673684 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Mar 17 17:38:01.673737 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:38:01.673749 kernel: SELinux: policy capability open_perms=1 Mar 17 17:38:01.673762 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:38:01.673771 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:38:01.673781 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:38:01.673790 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:38:01.673800 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:38:01.673809 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:38:01.673820 kernel: audit: type=1403 audit(1742233081.145:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:38:01.673841 systemd[1]: Successfully loaded SELinux policy in 30.050ms. Mar 17 17:38:01.673859 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.749ms. Mar 17 17:38:01.673872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:38:01.673887 systemd[1]: Detected virtualization kvm. Mar 17 17:38:01.673898 systemd[1]: Detected architecture arm64. Mar 17 17:38:01.673908 systemd[1]: Detected first boot. Mar 17 17:38:01.673919 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:38:01.673931 zram_generator::config[1042]: No configuration found. Mar 17 17:38:01.673942 systemd[1]: Populated /etc with preset unit settings. 
Mar 17 17:38:01.673952 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:38:01.673964 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:38:01.673975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:38:01.673986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:38:01.673996 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:38:01.674007 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:38:01.674018 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:38:01.674028 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:38:01.674039 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:38:01.674049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:38:01.674061 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:38:01.674072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:38:01.674082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:38:01.674093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:38:01.674103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:38:01.674113 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:38:01.674124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:38:01.674135 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:38:01.674145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:38:01.674158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:38:01.674168 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:38:01.674182 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:38:01.674192 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:38:01.674202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:38:01.674217 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:38:01.674231 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:38:01.674242 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:38:01.674255 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:38:01.674266 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:38:01.674276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:38:01.674286 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:38:01.674296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:38:01.674307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:38:01.674318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Mar 17 17:38:01.674328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:38:01.674339 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:38:01.674350 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:38:01.674365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:38:01.674375 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:38:01.674386 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:38:01.674398 systemd[1]: Reached target machines.target - Containers. Mar 17 17:38:01.674408 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:38:01.674419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:01.674429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:38:01.674441 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:38:01.674452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:38:01.674462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:38:01.674473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:01.674483 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:38:01.674494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:01.674504 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:38:01.674514 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:38:01.674526 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:38:01.674537 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:38:01.674547 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:38:01.674557 kernel: loop: module loaded Mar 17 17:38:01.674567 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:38:01.674578 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:38:01.674588 kernel: ACPI: bus type drm_connector registered Mar 17 17:38:01.674598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:38:01.674609 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:38:01.674707 kernel: fuse: init (API version 7.39) Mar 17 17:38:01.674720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:38:01.674731 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:38:01.674741 systemd[1]: Stopped verity-setup.service. Mar 17 17:38:01.674771 systemd-journald[1111]: Collecting audit messages is disabled. Mar 17 17:38:01.674808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:38:01.674820 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Mar 17 17:38:01.674839 systemd-journald[1111]: Journal started Mar 17 17:38:01.674864 systemd-journald[1111]: Runtime Journal (/run/log/journal/982dcfdc2e934050b3663028e5556a36) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:38:01.490473 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:38:01.512138 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:38:01.512478 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:38:01.677640 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:38:01.678099 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:38:01.679024 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:38:01.680018 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:38:01.681003 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:38:01.682702 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:38:01.683929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:38:01.685055 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:38:01.685191 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:38:01.686341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:01.686472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:01.687722 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:38:01.687873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:38:01.688893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:01.689023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:01.690293 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:38:01.690424 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:38:01.691573 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:01.691759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:01.692787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:38:01.694011 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:38:01.695323 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:38:01.707025 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:38:01.719747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:38:01.721644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:38:01.722445 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:38:01.722482 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:38:01.724130 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:38:01.726036 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:38:01.727804 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 17 17:38:01.728685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:01.730186 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:38:01.731948 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:38:01.732925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:38:01.736799 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:38:01.738755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:38:01.739740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:38:01.742134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:38:01.747452 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:38:01.750246 systemd-journald[1111]: Time spent on flushing to /var/log/journal/982dcfdc2e934050b3663028e5556a36 is 38.508ms for 855 entries. Mar 17 17:38:01.750246 systemd-journald[1111]: System Journal (/var/log/journal/982dcfdc2e934050b3663028e5556a36) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:38:01.796929 systemd-journald[1111]: Received client request to flush runtime journal. Mar 17 17:38:01.797033 kernel: loop0: detected capacity change from 0 to 116808 Mar 17 17:38:01.797057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:38:01.750844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:38:01.758812 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:38:01.760121 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:38:01.762008 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:38:01.763853 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:38:01.768612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:38:01.778913 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:38:01.781305 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:38:01.782495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:38:01.796181 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:38:01.801012 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:38:01.805639 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 17:38:01.808056 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:38:01.808737 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:38:01.812731 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:38:01.819828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:38:01.839326 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. 
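As a rough worked number, the journald flush statistics reported a few entries above (38.508 ms spent flushing 855 entries to /var/log/journal) average out to about 45 microseconds per entry, assuming the time was spread evenly:

# Average flush cost per journal entry, from the figures in the log above.
total_ms, entries = 38.508, 855
print(f"{total_ms / entries * 1000:.1f} us per entry")  # ~45.0 us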
Mar 17 17:38:01.839346 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 17 17:38:01.843636 kernel: loop2: detected capacity change from 0 to 113536 Mar 17 17:38:01.843883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:38:01.882644 kernel: loop3: detected capacity change from 0 to 116808 Mar 17 17:38:01.886639 kernel: loop4: detected capacity change from 0 to 189592 Mar 17 17:38:01.891640 kernel: loop5: detected capacity change from 0 to 113536 Mar 17 17:38:01.896067 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:38:01.896462 (sd-merge)[1178]: Merged extensions into '/usr'. Mar 17 17:38:01.900402 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:38:01.900420 systemd[1]: Reloading... Mar 17 17:38:01.964649 zram_generator::config[1210]: No configuration found. Mar 17 17:38:02.017857 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:38:02.050657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:02.085461 systemd[1]: Reloading finished in 184 ms. Mar 17 17:38:02.116059 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:38:02.117682 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:38:02.136071 systemd[1]: Starting ensure-sysext.service... Mar 17 17:38:02.138068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:38:02.149740 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:38:02.149852 systemd[1]: Reloading... Mar 17 17:38:02.158551 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:38:02.158970 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:38:02.159600 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:38:02.159825 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Mar 17 17:38:02.159886 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Mar 17 17:38:02.162500 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:38:02.162512 systemd-tmpfiles[1239]: Skipping /boot Mar 17 17:38:02.169677 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:38:02.169691 systemd-tmpfiles[1239]: Skipping /boot Mar 17 17:38:02.194871 zram_generator::config[1266]: No configuration found. Mar 17 17:38:02.276981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:02.311667 systemd[1]: Reloading finished in 161 ms. Mar 17 17:38:02.325441 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:38:02.336985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
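The sd-merge entries above list three extensions ('containerd-flatcar', 'docker-flatcar', 'kubernetes') merged into /usr; only kubernetes.raw was installed by Ignition, so the other two presumably ship with the OS image. A small sketch that enumerates extension images the way systemd-sysext discovers them, using the search directories documented in its man page:

from pathlib import Path

# Directories systemd-sysext scans for extension images, per its man page.
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions",
                "/usr/lib/extensions", "/usr/local/lib/extensions"]

for base in SEARCH_PATHS:
    root = Path(base)
    if not root.is_dir():
        continue
    for entry in sorted(root.iterdir()):
        # Extensions are raw disk images (*.raw) or plain directory trees.
        kind = "image" if entry.suffix == ".raw" else "directory"
        print(f"{base}: {entry.name} ({kind})")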
Mar 17 17:38:02.344354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:38:02.346916 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:38:02.349090 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:38:02.352944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:38:02.355962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:38:02.358111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:38:02.360943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:02.365589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:38:02.367708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:02.370042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:02.371538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:02.372345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:02.372616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:02.376301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:02.376468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:02.378237 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:02.378360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:02.387301 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:38:02.389719 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:38:02.395778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:02.397702 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Mar 17 17:38:02.404958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:38:02.407025 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:38:02.412899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:02.417010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:02.417933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:02.421309 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:38:02.424373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:38:02.426369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:38:02.428369 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:38:02.431361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:02.431502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:02.433141 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 17 17:38:02.433276 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:38:02.434995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:02.435133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:02.437021 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:02.438080 augenrules[1360]: No rules Mar 17 17:38:02.437159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:02.438892 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:38:02.439057 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:38:02.445061 systemd[1]: Finished ensure-sysext.service. Mar 17 17:38:02.455667 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:38:02.469827 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:38:02.470958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:38:02.471021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:38:02.472839 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:38:02.478696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:38:02.482682 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:38:02.485638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1350) Mar 17 17:38:02.498156 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:38:02.515818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:38:02.520942 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:38:02.537047 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:38:02.568781 systemd-networkd[1378]: lo: Link UP Mar 17 17:38:02.568790 systemd-networkd[1378]: lo: Gained carrier Mar 17 17:38:02.569539 systemd-networkd[1378]: Enumeration completed Mar 17 17:38:02.569727 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:38:02.570464 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:38:02.570467 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:38:02.571172 systemd-networkd[1378]: eth0: Link UP Mar 17 17:38:02.571182 systemd-networkd[1378]: eth0: Gained carrier Mar 17 17:38:02.571194 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:38:02.579850 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:38:02.580781 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:38:02.581860 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 17 17:38:02.584342 systemd-resolved[1305]: Positive Trust Anchors: Mar 17 17:38:02.584483 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:38:02.584515 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:38:02.591305 systemd-resolved[1305]: Defaulting to hostname 'linux'. Mar 17 17:38:02.591722 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:38:02.592462 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Mar 17 17:38:02.592967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:38:02.594364 systemd[1]: Reached target network.target - Network. Mar 17 17:38:02.594584 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:38:02.594644 systemd-timesyncd[1379]: Initial clock synchronization to Mon 2025-03-17 17:38:02.484271 UTC. Mar 17 17:38:02.595171 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:38:02.610918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:38:02.615582 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:38:02.618539 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:38:02.640396 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:38:02.659568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:38:02.686194 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:38:02.687398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:38:02.688287 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:38:02.689154 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:38:02.690091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:38:02.691221 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:38:02.692127 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:38:02.693204 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:38:02.694104 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:38:02.694137 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:38:02.694774 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:38:02.696328 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:38:02.698519 systemd[1]: Starting docker.socket - Docker Socket for the API... 
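The networkd, resolved, and timesyncd entries above capture the full network bring-up: link up on eth0, a DHCPv4 lease (10.0.0.119/16 via gateway 10.0.0.1), the DNSSEC root trust anchor, and an initial clock sync against 10.0.0.1. A throwaway sketch for pulling the lease facts out of a journal line like the one above; the regex is an assumption about the current message wording, not a stable interface:

import re

# Sample line copied from the journal above.
line = ("systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.119/16, "
        "gateway 10.0.0.1 acquired from 10.0.0.1")

m = re.search(r"(?P<iface>\S+): DHCPv4 address (?P<addr>[\d./]+), "
              r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)", line)
if m:
    print(m.groupdict())
    # {'iface': 'eth0', 'addr': '10.0.0.119/16',
    #  'gw': '10.0.0.1', 'server': '10.0.0.1'}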
Mar 17 17:38:02.703524 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:38:02.705581 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:38:02.706967 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:38:02.707878 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:38:02.708585 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:38:02.709493 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:38:02.709523 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:38:02.710497 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:38:02.712425 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:38:02.713560 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:38:02.714764 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:38:02.719037 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:38:02.720063 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:38:02.728715 jq[1410]: false Mar 17 17:38:02.725551 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:38:02.728476 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:38:02.733183 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:38:02.737105 extend-filesystems[1411]: Found loop3 Mar 17 17:38:02.737575 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:38:02.741214 extend-filesystems[1411]: Found loop4 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found loop5 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda1 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda2 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda3 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found usr Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda4 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda6 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda7 Mar 17 17:38:02.741214 extend-filesystems[1411]: Found vda9 Mar 17 17:38:02.741214 extend-filesystems[1411]: Checking size of /dev/vda9 Mar 17 17:38:02.741659 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:38:02.763055 dbus-daemon[1409]: [system] SELinux support is enabled Mar 17 17:38:02.743245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:38:02.744301 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:38:02.745266 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:38:02.747796 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:38:02.765215 jq[1425]: true Mar 17 17:38:02.750925 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 17 17:38:02.756012 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:38:02.756158 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:38:02.756413 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:38:02.756543 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:38:02.761439 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:38:02.761599 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:38:02.764741 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:38:02.776327 extend-filesystems[1411]: Resized partition /dev/vda9 Mar 17 17:38:02.778792 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:38:02.780110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:38:02.780191 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:38:02.781910 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:38:02.781946 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:38:02.784814 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:38:02.796637 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:38:02.796725 tar[1429]: linux-arm64/helm Mar 17 17:38:02.796974 jq[1431]: true Mar 17 17:38:02.800715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1352) Mar 17 17:38:02.806319 update_engine[1424]: I20250317 17:38:02.806052 1424 main.cc:92] Flatcar Update Engine starting Mar 17 17:38:02.813413 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:38:02.818939 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:38:02.823643 update_engine[1424]: I20250317 17:38:02.820014 1424 update_check_scheduler.cc:74] Next update check in 2m2s Mar 17 17:38:02.832657 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:38:02.845908 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:38:02.845908 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:38:02.845908 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:38:02.857611 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Mar 17 17:38:02.859270 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:38:02.846035 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:38:02.846781 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:38:02.846969 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:38:02.847397 systemd-logind[1422]: New seat seat0. Mar 17 17:38:02.851649 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:38:02.854234 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
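The online resize logged above grows /dev/vda9 from 553472 to 1864699 blocks; at the 4 KiB block size resize2fs reports, that is roughly 2.1 GiB grown to 7.1 GiB:

# Block counts from the kernel and resize2fs messages above, at 4 KiB blocks.
BLOCK = 4096
for label, blocks in (("before", 553472), ("after", 1864699)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes ~ {size / 2**30:.2f} GiB")
# before: 553472 blocks = 2267021312 bytes ~ 2.11 GiB
# after: 1864699 blocks = 7637807104 bytes ~ 7.11 GiB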
Mar 17 17:38:02.856579 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:38:02.913904 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:38:03.021475 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:38:03.037662 containerd[1432]: time="2025-03-17T17:38:03.037560633Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:38:03.045109 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:38:03.057914 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:38:03.063084 containerd[1432]: time="2025-03-17T17:38:03.063021813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:03.064312 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:38:03.064505 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:38:03.064782 containerd[1432]: time="2025-03-17T17:38:03.064745690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:03.064782 containerd[1432]: time="2025-03-17T17:38:03.064779300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:38:03.064846 containerd[1432]: time="2025-03-17T17:38:03.064795474Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:38:03.064963 containerd[1432]: time="2025-03-17T17:38:03.064943960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:38:03.065006 containerd[1432]: time="2025-03-17T17:38:03.064978004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065080 containerd[1432]: time="2025-03-17T17:38:03.065038124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065080 containerd[1432]: time="2025-03-17T17:38:03.065055837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065234 containerd[1432]: time="2025-03-17T17:38:03.065213987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065263 containerd[1432]: time="2025-03-17T17:38:03.065235053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065263 containerd[1432]: time="2025-03-17T17:38:03.065248466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065263 containerd[1432]: time="2025-03-17T17:38:03.065257421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065343 containerd[1432]: time="2025-03-17T17:38:03.065323655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065642 containerd[1432]: time="2025-03-17T17:38:03.065501412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065642 containerd[1432]: time="2025-03-17T17:38:03.065599206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:03.065642 containerd[1432]: time="2025-03-17T17:38:03.065634552Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:38:03.065877 containerd[1432]: time="2025-03-17T17:38:03.065705284Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:38:03.065877 containerd[1432]: time="2025-03-17T17:38:03.065748993Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:38:03.069954 containerd[1432]: time="2025-03-17T17:38:03.069921739Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:38:03.070015 containerd[1432]: time="2025-03-17T17:38:03.069972746Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:38:03.070015 containerd[1432]: time="2025-03-17T17:38:03.069992549Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:38:03.070015 containerd[1432]: time="2025-03-17T17:38:03.070009434Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:38:03.070066 containerd[1432]: time="2025-03-17T17:38:03.070024937Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:38:03.070228 containerd[1432]: time="2025-03-17T17:38:03.070158866Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:38:03.070523 containerd[1432]: time="2025-03-17T17:38:03.070497771Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070775847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070810522Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070834310Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070850129Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070862871Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070875810Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070898020Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070914075Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070928238Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070940467Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070951828Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070972223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.070989975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071136 containerd[1432]: time="2025-03-17T17:38:03.071002165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071013684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071026978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071042600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071059129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071071161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071083942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071099051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071110452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071121734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071133530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071147771Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071168047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071186667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071455 containerd[1432]: time="2025-03-17T17:38:03.071199370Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071595752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071639383Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071651533Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071663841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071675833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071689522Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071699542Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:38:03.071717 containerd[1432]: time="2025-03-17T17:38:03.071709483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:38:03.073685 containerd[1432]: time="2025-03-17T17:38:03.072375223Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:38:03.073685 containerd[1432]: time="2025-03-17T17:38:03.072479092Z" level=info msg="Connect containerd service" Mar 17 17:38:03.073685 containerd[1432]: time="2025-03-17T17:38:03.072512821Z" level=info msg="using legacy CRI server" Mar 17 17:38:03.073685 containerd[1432]: time="2025-03-17T17:38:03.072520158Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:38:03.073685 containerd[1432]: time="2025-03-17T17:38:03.073238404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:38:03.074405 containerd[1432]: time="2025-03-17T17:38:03.074370863Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:38:03.075803 
containerd[1432]: time="2025-03-17T17:38:03.074833401Z" level=info msg="Start subscribing containerd event" Mar 17 17:38:03.075803 containerd[1432]: time="2025-03-17T17:38:03.074882515Z" level=info msg="Start recovering state" Mar 17 17:38:03.075803 containerd[1432]: time="2025-03-17T17:38:03.074945081Z" level=info msg="Start event monitor" Mar 17 17:38:03.075803 containerd[1432]: time="2025-03-17T17:38:03.074956324Z" level=info msg="Start snapshots syncer" Mar 17 17:38:03.075803 containerd[1432]: time="2025-03-17T17:38:03.074966620Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:38:03.075803 containerd[1432]: time="2025-03-17T17:38:03.074976798Z" level=info msg="Start streaming server" Mar 17 17:38:03.075955 containerd[1432]: time="2025-03-17T17:38:03.075883018Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:38:03.075955 containerd[1432]: time="2025-03-17T17:38:03.075929370Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:38:03.076282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:38:03.078345 containerd[1432]: time="2025-03-17T17:38:03.077457778Z" level=info msg="containerd successfully booted in 0.041480s" Mar 17 17:38:03.077525 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:38:03.088569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:38:03.091471 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:38:03.094250 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:38:03.095354 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:38:03.172317 tar[1429]: linux-arm64/LICENSE Mar 17 17:38:03.172317 tar[1429]: linux-arm64/README.md Mar 17 17:38:03.186923 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:38:04.327807 systemd-networkd[1378]: eth0: Gained IPv6LL Mar 17 17:38:04.330418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:38:04.331816 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:38:04.346895 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:38:04.349127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:04.350965 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:38:04.364742 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:38:04.364966 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:38:04.367091 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:38:04.371817 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:38:04.818491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:04.819768 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:38:04.822231 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:04.823664 systemd[1]: Startup finished in 560ms (kernel) + 4.445s (initrd) + 3.712s (userspace) = 8.719s. 
Mar 17 17:38:05.232826 kubelet[1521]: E0317 17:38:05.232673 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:05.235345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:05.235492 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:09.270343 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:38:09.271391 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556). Mar 17 17:38:09.363923 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:09.365567 sshd-session[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:09.383525 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:38:09.390936 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:38:09.393080 systemd-logind[1422]: New session 1 of user core. Mar 17 17:38:09.401663 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:38:09.404352 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:38:09.411828 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:38:09.480363 systemd[1538]: Queued start job for default target default.target. Mar 17 17:38:09.491507 systemd[1538]: Created slice app.slice - User Application Slice. Mar 17 17:38:09.491549 systemd[1538]: Reached target paths.target - Paths. Mar 17 17:38:09.491560 systemd[1538]: Reached target timers.target - Timers. Mar 17 17:38:09.492768 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:38:09.502132 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:38:09.502194 systemd[1538]: Reached target sockets.target - Sockets. Mar 17 17:38:09.502206 systemd[1538]: Reached target basic.target - Basic System. Mar 17 17:38:09.502241 systemd[1538]: Reached target default.target - Main User Target. Mar 17 17:38:09.502266 systemd[1538]: Startup finished in 84ms. Mar 17 17:38:09.502500 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:38:09.503870 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:38:09.566731 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:52572.service - OpenSSH per-connection server daemon (10.0.0.1:52572). Mar 17 17:38:09.607405 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 52572 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:09.608655 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:09.612741 systemd-logind[1422]: New session 2 of user core. Mar 17 17:38:09.625779 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:38:09.676089 sshd[1551]: Connection closed by 10.0.0.1 port 52572 Mar 17 17:38:09.676702 sshd-session[1549]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:09.685878 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:52572.service: Deactivated successfully. 
Mar 17 17:38:09.687298 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:38:09.689863 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:38:09.690880 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:52584.service - OpenSSH per-connection server daemon (10.0.0.1:52584). Mar 17 17:38:09.691605 systemd-logind[1422]: Removed session 2. Mar 17 17:38:09.727936 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 52584 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:09.729092 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:09.732740 systemd-logind[1422]: New session 3 of user core. Mar 17 17:38:09.743767 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:38:09.791155 sshd[1558]: Connection closed by 10.0.0.1 port 52584 Mar 17 17:38:09.791669 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:09.805051 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:52584.service: Deactivated successfully. Mar 17 17:38:09.806516 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:38:09.807809 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:38:09.809328 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:52596.service - OpenSSH per-connection server daemon (10.0.0.1:52596). Mar 17 17:38:09.810148 systemd-logind[1422]: Removed session 3. Mar 17 17:38:09.847893 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 52596 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:09.849456 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:09.853283 systemd-logind[1422]: New session 4 of user core. Mar 17 17:38:09.862754 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:38:09.914200 sshd[1565]: Connection closed by 10.0.0.1 port 52596 Mar 17 17:38:09.914700 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:09.923854 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:52596.service: Deactivated successfully. Mar 17 17:38:09.925257 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:38:09.927768 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:38:09.928878 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:52610.service - OpenSSH per-connection server daemon (10.0.0.1:52610). Mar 17 17:38:09.929685 systemd-logind[1422]: Removed session 4. Mar 17 17:38:09.966345 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 52610 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:09.967455 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:09.971276 systemd-logind[1422]: New session 5 of user core. Mar 17 17:38:09.980747 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:38:10.040799 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:38:10.041369 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:10.062397 sudo[1573]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:10.064371 sshd[1572]: Connection closed by 10.0.0.1 port 52610 Mar 17 17:38:10.064226 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:10.074012 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:52610.service: Deactivated successfully. Mar 17 17:38:10.075434 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:38:10.076699 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:38:10.085894 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:52616.service - OpenSSH per-connection server daemon (10.0.0.1:52616). Mar 17 17:38:10.087114 systemd-logind[1422]: Removed session 5. Mar 17 17:38:10.121585 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 52616 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:10.122990 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:10.126385 systemd-logind[1422]: New session 6 of user core. Mar 17 17:38:10.137765 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:38:10.187668 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:38:10.187930 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:10.190806 sudo[1582]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:10.195075 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:38:10.195584 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:10.211878 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:38:10.233286 augenrules[1604]: No rules Mar 17 17:38:10.234332 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:38:10.234504 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:38:10.235559 sudo[1581]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:10.236871 sshd[1580]: Connection closed by 10.0.0.1 port 52616 Mar 17 17:38:10.237210 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:10.246845 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:52616.service: Deactivated successfully. Mar 17 17:38:10.248175 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:38:10.249364 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:38:10.250443 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:52628.service - OpenSSH per-connection server daemon (10.0.0.1:52628). Mar 17 17:38:10.251161 systemd-logind[1422]: Removed session 6. Mar 17 17:38:10.287175 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 52628 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:38:10.288226 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:10.292056 systemd-logind[1422]: New session 7 of user core. Mar 17 17:38:10.300766 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 17 17:38:10.351517 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:38:10.351818 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:10.659962 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:38:10.660003 (dockerd)[1636]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:38:10.907082 dockerd[1636]: time="2025-03-17T17:38:10.907020982Z" level=info msg="Starting up" Mar 17 17:38:11.042848 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport358291937-merged.mount: Deactivated successfully. Mar 17 17:38:11.060511 dockerd[1636]: time="2025-03-17T17:38:11.060231091Z" level=info msg="Loading containers: start." Mar 17 17:38:11.188642 kernel: Initializing XFRM netlink socket Mar 17 17:38:11.252017 systemd-networkd[1378]: docker0: Link UP Mar 17 17:38:11.299019 dockerd[1636]: time="2025-03-17T17:38:11.298874137Z" level=info msg="Loading containers: done." Mar 17 17:38:11.313716 dockerd[1636]: time="2025-03-17T17:38:11.313664265Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:38:11.313853 dockerd[1636]: time="2025-03-17T17:38:11.313760328Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:38:11.313877 dockerd[1636]: time="2025-03-17T17:38:11.313864751Z" level=info msg="Daemon has completed initialization" Mar 17 17:38:11.377919 dockerd[1636]: time="2025-03-17T17:38:11.377855486Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:38:11.378102 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:38:11.951188 containerd[1432]: time="2025-03-17T17:38:11.951138430Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 17:38:12.502351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339675830.mount: Deactivated successfully. 
Mar 17 17:38:13.711520 containerd[1432]: time="2025-03-17T17:38:13.711453075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:13.711900 containerd[1432]: time="2025-03-17T17:38:13.711843614Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768" Mar 17 17:38:13.712772 containerd[1432]: time="2025-03-17T17:38:13.712742106Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:13.715671 containerd[1432]: time="2025-03-17T17:38:13.715640123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:13.717836 containerd[1432]: time="2025-03-17T17:38:13.717806482Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 1.766624092s" Mar 17 17:38:13.717836 containerd[1432]: time="2025-03-17T17:38:13.717841555Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 17 17:38:13.718433 containerd[1432]: time="2025-03-17T17:38:13.718403032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 17:38:14.933809 containerd[1432]: time="2025-03-17T17:38:14.933763104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:14.934504 containerd[1432]: time="2025-03-17T17:38:14.934467144Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980" Mar 17 17:38:14.935672 containerd[1432]: time="2025-03-17T17:38:14.935447864Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:14.939125 containerd[1432]: time="2025-03-17T17:38:14.939085491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:14.940149 containerd[1432]: time="2025-03-17T17:38:14.940114497Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.221676547s" Mar 17 17:38:14.940187 containerd[1432]: time="2025-03-17T17:38:14.940150462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 17 17:38:14.940589 
containerd[1432]: time="2025-03-17T17:38:14.940562432Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 17:38:15.485791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:38:15.496835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:15.589499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:15.593205 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:15.631226 kubelet[1901]: E0317 17:38:15.631181 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:15.633771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:15.633896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:16.307657 containerd[1432]: time="2025-03-17T17:38:16.307593130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:16.308634 containerd[1432]: time="2025-03-17T17:38:16.307930429Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831" Mar 17 17:38:16.309144 containerd[1432]: time="2025-03-17T17:38:16.309115464Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:16.312050 containerd[1432]: time="2025-03-17T17:38:16.312002956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:16.314153 containerd[1432]: time="2025-03-17T17:38:16.314127704Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.37353257s" Mar 17 17:38:16.314218 containerd[1432]: time="2025-03-17T17:38:16.314158948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 17 17:38:16.314701 containerd[1432]: time="2025-03-17T17:38:16.314675490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:38:17.293687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954583056.mount: Deactivated successfully. 
Mar 17 17:38:17.519949 containerd[1432]: time="2025-03-17T17:38:17.519901900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:17.520886 containerd[1432]: time="2025-03-17T17:38:17.520752609Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 17 17:38:17.521645 containerd[1432]: time="2025-03-17T17:38:17.521501773Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:17.523468 containerd[1432]: time="2025-03-17T17:38:17.523440166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:17.524887 containerd[1432]: time="2025-03-17T17:38:17.524853876Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.210133608s" Mar 17 17:38:17.524953 containerd[1432]: time="2025-03-17T17:38:17.524887844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 17:38:17.525327 containerd[1432]: time="2025-03-17T17:38:17.525301723Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:38:18.072333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275956583.mount: Deactivated successfully. 
Mar 17 17:38:18.677127 containerd[1432]: time="2025-03-17T17:38:18.677081444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:18.678076 containerd[1432]: time="2025-03-17T17:38:18.677986678Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 17 17:38:18.678694 containerd[1432]: time="2025-03-17T17:38:18.678662420Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:18.682487 containerd[1432]: time="2025-03-17T17:38:18.682012700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:18.687738 containerd[1432]: time="2025-03-17T17:38:18.687694397Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.162359263s" Mar 17 17:38:18.687738 containerd[1432]: time="2025-03-17T17:38:18.687733325Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:38:18.688207 containerd[1432]: time="2025-03-17T17:38:18.688173146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:38:19.125665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324469639.mount: Deactivated successfully. 
Mar 17 17:38:19.130229 containerd[1432]: time="2025-03-17T17:38:19.130178373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:19.130573 containerd[1432]: time="2025-03-17T17:38:19.130527804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 17 17:38:19.131467 containerd[1432]: time="2025-03-17T17:38:19.131431212Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:19.133433 containerd[1432]: time="2025-03-17T17:38:19.133395531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:19.134286 containerd[1432]: time="2025-03-17T17:38:19.134254132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 445.951386ms" Mar 17 17:38:19.134286 containerd[1432]: time="2025-03-17T17:38:19.134281688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:38:19.135360 containerd[1432]: time="2025-03-17T17:38:19.135335331Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 17:38:19.596565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070938880.mount: Deactivated successfully. Mar 17 17:38:21.547591 containerd[1432]: time="2025-03-17T17:38:21.546352881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:21.547591 containerd[1432]: time="2025-03-17T17:38:21.547535366Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Mar 17 17:38:21.548134 containerd[1432]: time="2025-03-17T17:38:21.548102858Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:21.552427 containerd[1432]: time="2025-03-17T17:38:21.552384399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:21.554010 containerd[1432]: time="2025-03-17T17:38:21.553345681Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.417971129s" Mar 17 17:38:21.554686 containerd[1432]: time="2025-03-17T17:38:21.554662479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 17 17:38:25.885143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 17:38:25.894775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:26.018954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:26.022673 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:26.059265 kubelet[2054]: E0317 17:38:26.059198 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:26.061075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:26.061197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:26.639678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:26.650995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:26.674441 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-7.scope)... Mar 17 17:38:26.674456 systemd[1]: Reloading... Mar 17 17:38:26.739650 zram_generator::config[2109]: No configuration found. Mar 17 17:38:26.863002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:26.919135 systemd[1]: Reloading finished in 244 ms. Mar 17 17:38:26.959017 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:26.961634 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:38:26.961812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:26.963202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:27.054791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:27.060449 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:38:27.092527 kubelet[2156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:27.092527 kubelet[2156]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:38:27.092527 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:38:27.092848 kubelet[2156]: I0317 17:38:27.092727 2156 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:38:27.974156 kubelet[2156]: I0317 17:38:27.974111 2156 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:38:27.974156 kubelet[2156]: I0317 17:38:27.974145 2156 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:38:27.974393 kubelet[2156]: I0317 17:38:27.974372 2156 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:38:28.000906 kubelet[2156]: E0317 17:38:28.000876 2156 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:28.001588 kubelet[2156]: I0317 17:38:28.001577 2156 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:38:28.010266 kubelet[2156]: E0317 17:38:28.010232 2156 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:38:28.010266 kubelet[2156]: I0317 17:38:28.010266 2156 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:38:28.013660 kubelet[2156]: I0317 17:38:28.013632 2156 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:38:28.013934 kubelet[2156]: I0317 17:38:28.013917 2156 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:38:28.014044 kubelet[2156]: I0317 17:38:28.014015 2156 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:38:28.014204 kubelet[2156]: I0317 17:38:28.014041 2156 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:38:28.014348 kubelet[2156]: I0317 17:38:28.014337 2156 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:38:28.014370 kubelet[2156]: I0317 17:38:28.014350 2156 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:38:28.014539 kubelet[2156]: I0317 17:38:28.014520 2156 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:28.016129 kubelet[2156]: I0317 17:38:28.016103 2156 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:38:28.016129 kubelet[2156]: I0317 17:38:28.016129 2156 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:38:28.016235 kubelet[2156]: I0317 17:38:28.016216 2156 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:38:28.016235 kubelet[2156]: I0317 17:38:28.016229 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:38:28.018446 kubelet[2156]: I0317 17:38:28.018044 2156 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:38:28.019915 kubelet[2156]: I0317 17:38:28.019894 2156 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:38:28.020846 kubelet[2156]: W0317 17:38:28.020667 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.119:6443: connect: connection refused Mar 17 17:38:28.020846 kubelet[2156]: E0317 17:38:28.020723 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:28.021232 kubelet[2156]: W0317 17:38:28.021193 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:28.021337 kubelet[2156]: E0317 17:38:28.021319 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:28.024106 kubelet[2156]: W0317 17:38:28.024082 2156 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:38:28.026635 kubelet[2156]: I0317 17:38:28.024765 2156 server.go:1269] "Started kubelet" Mar 17 17:38:28.026635 kubelet[2156]: I0317 17:38:28.024992 2156 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:38:28.026635 kubelet[2156]: I0317 17:38:28.025562 2156 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:38:28.026635 kubelet[2156]: I0317 17:38:28.025821 2156 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:38:28.026635 kubelet[2156]: I0317 17:38:28.026301 2156 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:38:28.027661 kubelet[2156]: I0317 17:38:28.026944 2156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:38:28.027661 kubelet[2156]: I0317 17:38:28.027474 2156 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:38:28.028691 kubelet[2156]: E0317 17:38:28.028673 2156 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:38:28.028852 kubelet[2156]: I0317 17:38:28.028842 2156 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:38:28.029005 kubelet[2156]: I0317 17:38:28.028979 2156 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:38:28.029161 kubelet[2156]: I0317 17:38:28.029148 2156 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:38:28.029309 kubelet[2156]: W0317 17:38:28.029269 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:28.029346 kubelet[2156]: E0317 17:38:28.029310 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:28.029532 kubelet[2156]: E0317 17:38:28.029489 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Mar 17 17:38:28.030070 kubelet[2156]: I0317 17:38:28.029650 2156 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:38:28.030070 kubelet[2156]: I0317 17:38:28.029731 2156 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:38:28.030230 kubelet[2156]: E0317 17:38:28.030207 2156 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:38:28.030256 kubelet[2156]: E0317 17:38:28.028861 2156 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da7c7efbd43c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:38:28.02473876 +0000 UTC m=+0.961328702,LastTimestamp:2025-03-17 17:38:28.02473876 +0000 UTC m=+0.961328702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:38:28.031299 kubelet[2156]: I0317 17:38:28.031276 2156 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:38:28.043001 kubelet[2156]: I0317 17:38:28.042982 2156 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:38:28.043001 kubelet[2156]: I0317 17:38:28.042997 2156 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:38:28.043102 kubelet[2156]: I0317 17:38:28.043012 2156 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:28.046600 kubelet[2156]: I0317 17:38:28.046560 2156 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:38:28.047602 kubelet[2156]: I0317 17:38:28.047570 2156 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:38:28.047602 kubelet[2156]: I0317 17:38:28.047595 2156 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:38:28.047691 kubelet[2156]: I0317 17:38:28.047614 2156 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:38:28.047888 kubelet[2156]: E0317 17:38:28.047778 2156 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:38:28.048372 kubelet[2156]: W0317 17:38:28.048312 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:28.048372 kubelet[2156]: E0317 17:38:28.048366 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:28.129522 kubelet[2156]: E0317 17:38:28.129473 2156 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:38:28.148741 kubelet[2156]: E0317 17:38:28.148703 2156 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:38:28.197759 kubelet[2156]: I0317 17:38:28.197724 2156 policy_none.go:49] "None policy: Start" Mar 17 17:38:28.198543 kubelet[2156]: I0317 17:38:28.198519 2156 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:38:28.198606 kubelet[2156]: I0317 17:38:28.198548 2156 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:38:28.204265 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:38:28.218247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:38:28.220785 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:38:28.230050 kubelet[2156]: E0317 17:38:28.229976 2156 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:38:28.230305 kubelet[2156]: E0317 17:38:28.230268 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Mar 17 17:38:28.232003 kubelet[2156]: I0317 17:38:28.231969 2156 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:38:28.232181 kubelet[2156]: I0317 17:38:28.232161 2156 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:38:28.232205 kubelet[2156]: I0317 17:38:28.232178 2156 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:38:28.233099 kubelet[2156]: I0317 17:38:28.233076 2156 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:38:28.233807 kubelet[2156]: E0317 17:38:28.233774 2156 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:38:28.333937 kubelet[2156]: I0317 17:38:28.333895 2156 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:38:28.334372 kubelet[2156]: E0317 17:38:28.334345 2156 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Mar 17 17:38:28.356339 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 17 17:38:28.382632 systemd[1]: Created slice kubepods-burstable-pod939aad522057d33f8280a69aef0be121.slice - libcontainer container kubepods-burstable-pod939aad522057d33f8280a69aef0be121.slice. Mar 17 17:38:28.386827 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. 
Mar 17 17:38:28.431099 kubelet[2156]: I0317 17:38:28.431078 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:28.431159 kubelet[2156]: I0317 17:38:28.431139 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:28.431195 kubelet[2156]: I0317 17:38:28.431183 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:28.431224 kubelet[2156]: I0317 17:38:28.431210 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:28.431310 kubelet[2156]: I0317 17:38:28.431286 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:28.431345 kubelet[2156]: I0317 17:38:28.431318 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:38:28.431372 kubelet[2156]: I0317 17:38:28.431361 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:28.431393 kubelet[2156]: I0317 17:38:28.431380 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:28.431432 kubelet[2156]: I0317 17:38:28.431395 2156 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:28.536219 kubelet[2156]: I0317 17:38:28.536121 2156 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:38:28.536631 kubelet[2156]: E0317 17:38:28.536591 2156 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Mar 17 17:38:28.631181 kubelet[2156]: E0317 17:38:28.631129 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Mar 17 17:38:28.681431 kubelet[2156]: E0317 17:38:28.681404 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:28.682103 containerd[1432]: time="2025-03-17T17:38:28.682062357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:28.685148 kubelet[2156]: E0317 17:38:28.685115 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:28.685655 containerd[1432]: time="2025-03-17T17:38:28.685596308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:939aad522057d33f8280a69aef0be121,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:28.688752 kubelet[2156]: E0317 17:38:28.688723 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:28.689051 containerd[1432]: time="2025-03-17T17:38:28.689024191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:28.937996 kubelet[2156]: I0317 17:38:28.937909 2156 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:38:28.938212 kubelet[2156]: E0317 17:38:28.938177 2156 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Mar 17 17:38:29.007995 kubelet[2156]: W0317 17:38:29.007927 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:29.008074 kubelet[2156]: E0317 17:38:29.007998 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:29.091545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731956015.mount: Deactivated successfully. 
Mar 17 17:38:29.096586 containerd[1432]: time="2025-03-17T17:38:29.096534535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:29.098536 containerd[1432]: time="2025-03-17T17:38:29.098402816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:38:29.100008 containerd[1432]: time="2025-03-17T17:38:29.099965427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:29.101184 containerd[1432]: time="2025-03-17T17:38:29.101154278Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:29.102211 containerd[1432]: time="2025-03-17T17:38:29.102080401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:38:29.103693 containerd[1432]: time="2025-03-17T17:38:29.103445137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:38:29.104392 containerd[1432]: time="2025-03-17T17:38:29.104367902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:29.105647 containerd[1432]: time="2025-03-17T17:38:29.105230213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 423.089813ms" Mar 17 17:38:29.109453 kubelet[2156]: W0317 17:38:29.107604 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:29.109453 kubelet[2156]: E0317 17:38:29.107709 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:29.109538 containerd[1432]: time="2025-03-17T17:38:29.109503703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 423.81372ms" Mar 17 17:38:29.110310 containerd[1432]: time="2025-03-17T17:38:29.110276652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:29.115687 containerd[1432]: 
time="2025-03-17T17:38:29.114161309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 425.083264ms" Mar 17 17:38:29.219966 kubelet[2156]: W0317 17:38:29.219788 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:29.219966 kubelet[2156]: E0317 17:38:29.219860 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:29.227093 containerd[1432]: time="2025-03-17T17:38:29.227011274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:29.227225 containerd[1432]: time="2025-03-17T17:38:29.227073408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:29.227225 containerd[1432]: time="2025-03-17T17:38:29.227085483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:29.227225 containerd[1432]: time="2025-03-17T17:38:29.227161210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:29.228157 containerd[1432]: time="2025-03-17T17:38:29.228078857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:29.228298 containerd[1432]: time="2025-03-17T17:38:29.228171298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:29.228298 containerd[1432]: time="2025-03-17T17:38:29.228188171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:29.228298 containerd[1432]: time="2025-03-17T17:38:29.228270295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:29.229173 containerd[1432]: time="2025-03-17T17:38:29.228879754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:29.229173 containerd[1432]: time="2025-03-17T17:38:29.228920817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:29.229173 containerd[1432]: time="2025-03-17T17:38:29.228969236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:29.229173 containerd[1432]: time="2025-03-17T17:38:29.229083867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:29.244771 systemd[1]: Started cri-containerd-797132c8998768423fb2838e2b8e1d920e0d2cc200cb45fc54ab0a1947b240ec.scope - libcontainer container 797132c8998768423fb2838e2b8e1d920e0d2cc200cb45fc54ab0a1947b240ec. Mar 17 17:38:29.248980 kubelet[2156]: W0317 17:38:29.248780 2156 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Mar 17 17:38:29.248980 kubelet[2156]: E0317 17:38:29.248821 2156 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:38:29.249008 systemd[1]: Started cri-containerd-36ce83f543c00ca21026bcb04686ff567493c4ddeae114c092e61bc741237f76.scope - libcontainer container 36ce83f543c00ca21026bcb04686ff567493c4ddeae114c092e61bc741237f76. Mar 17 17:38:29.249999 systemd[1]: Started cri-containerd-3c3385b56584c929f797d89dffb3b2c07bdbdb41a7aa91b0f690f33dda4b4f3e.scope - libcontainer container 3c3385b56584c929f797d89dffb3b2c07bdbdb41a7aa91b0f690f33dda4b4f3e. Mar 17 17:38:29.282881 containerd[1432]: time="2025-03-17T17:38:29.282822580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:939aad522057d33f8280a69aef0be121,Namespace:kube-system,Attempt:0,} returns sandbox id \"797132c8998768423fb2838e2b8e1d920e0d2cc200cb45fc54ab0a1947b240ec\"" Mar 17 17:38:29.284160 kubelet[2156]: E0317 17:38:29.283983 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:29.285930 containerd[1432]: time="2025-03-17T17:38:29.285866797Z" level=info msg="CreateContainer within sandbox \"797132c8998768423fb2838e2b8e1d920e0d2cc200cb45fc54ab0a1947b240ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:38:29.288642 containerd[1432]: time="2025-03-17T17:38:29.288604185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"36ce83f543c00ca21026bcb04686ff567493c4ddeae114c092e61bc741237f76\"" Mar 17 17:38:29.289670 kubelet[2156]: E0317 17:38:29.289615 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:29.291826 containerd[1432]: time="2025-03-17T17:38:29.291772268Z" level=info msg="CreateContainer within sandbox \"36ce83f543c00ca21026bcb04686ff567493c4ddeae114c092e61bc741237f76\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:38:29.293864 containerd[1432]: time="2025-03-17T17:38:29.293797121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c3385b56584c929f797d89dffb3b2c07bdbdb41a7aa91b0f690f33dda4b4f3e\"" Mar 17 17:38:29.294469 kubelet[2156]: E0317 17:38:29.294448 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:29.296087 containerd[1432]: time="2025-03-17T17:38:29.296061992Z" level=info msg="CreateContainer within sandbox \"3c3385b56584c929f797d89dffb3b2c07bdbdb41a7aa91b0f690f33dda4b4f3e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:38:29.303547 containerd[1432]: time="2025-03-17T17:38:29.303510843Z" level=info msg="CreateContainer within sandbox \"797132c8998768423fb2838e2b8e1d920e0d2cc200cb45fc54ab0a1947b240ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b58c866f4ecc5ca64c6b7e2073e7a1446bb729c2c859286ff5ff716d18ae7ed9\"" Mar 17 17:38:29.304128 containerd[1432]: time="2025-03-17T17:38:29.304071123Z" level=info msg="StartContainer for \"b58c866f4ecc5ca64c6b7e2073e7a1446bb729c2c859286ff5ff716d18ae7ed9\"" Mar 17 17:38:29.305787 containerd[1432]: time="2025-03-17T17:38:29.305745006Z" level=info msg="CreateContainer within sandbox \"36ce83f543c00ca21026bcb04686ff567493c4ddeae114c092e61bc741237f76\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fca76f356f5a43c701c5d92d1e83576174256bf4ac93ed8331e17fd136981be\"" Mar 17 17:38:29.306268 containerd[1432]: time="2025-03-17T17:38:29.306240394Z" level=info msg="StartContainer for \"1fca76f356f5a43c701c5d92d1e83576174256bf4ac93ed8331e17fd136981be\"" Mar 17 17:38:29.311198 containerd[1432]: time="2025-03-17T17:38:29.311163526Z" level=info msg="CreateContainer within sandbox \"3c3385b56584c929f797d89dffb3b2c07bdbdb41a7aa91b0f690f33dda4b4f3e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea34eb24eb03cb38da5db4cf7e26d85fdb93ce3681f0666d6d2bc926ba1c629c\"" Mar 17 17:38:29.312571 containerd[1432]: time="2025-03-17T17:38:29.311876821Z" level=info msg="StartContainer for \"ea34eb24eb03cb38da5db4cf7e26d85fdb93ce3681f0666d6d2bc926ba1c629c\"" Mar 17 17:38:29.333768 systemd[1]: Started cri-containerd-1fca76f356f5a43c701c5d92d1e83576174256bf4ac93ed8331e17fd136981be.scope - libcontainer container 1fca76f356f5a43c701c5d92d1e83576174256bf4ac93ed8331e17fd136981be. Mar 17 17:38:29.334714 systemd[1]: Started cri-containerd-b58c866f4ecc5ca64c6b7e2073e7a1446bb729c2c859286ff5ff716d18ae7ed9.scope - libcontainer container b58c866f4ecc5ca64c6b7e2073e7a1446bb729c2c859286ff5ff716d18ae7ed9. Mar 17 17:38:29.338079 systemd[1]: Started cri-containerd-ea34eb24eb03cb38da5db4cf7e26d85fdb93ce3681f0666d6d2bc926ba1c629c.scope - libcontainer container ea34eb24eb03cb38da5db4cf7e26d85fdb93ce3681f0666d6d2bc926ba1c629c. 
Mar 17 17:38:29.367749 containerd[1432]: time="2025-03-17T17:38:29.367709437Z" level=info msg="StartContainer for \"1fca76f356f5a43c701c5d92d1e83576174256bf4ac93ed8331e17fd136981be\" returns successfully" Mar 17 17:38:29.399771 containerd[1432]: time="2025-03-17T17:38:29.398665384Z" level=info msg="StartContainer for \"ea34eb24eb03cb38da5db4cf7e26d85fdb93ce3681f0666d6d2bc926ba1c629c\" returns successfully" Mar 17 17:38:29.399771 containerd[1432]: time="2025-03-17T17:38:29.398801126Z" level=info msg="StartContainer for \"b58c866f4ecc5ca64c6b7e2073e7a1446bb729c2c859286ff5ff716d18ae7ed9\" returns successfully" Mar 17 17:38:29.435004 kubelet[2156]: E0317 17:38:29.431805 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Mar 17 17:38:29.739910 kubelet[2156]: I0317 17:38:29.739880 2156 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:38:30.054440 kubelet[2156]: E0317 17:38:30.054245 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:30.055954 kubelet[2156]: E0317 17:38:30.055929 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:30.057854 kubelet[2156]: E0317 17:38:30.057798 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:31.061476 kubelet[2156]: E0317 17:38:31.061444 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:31.732297 kubelet[2156]: E0317 17:38:31.732255 2156 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:38:31.793985 kubelet[2156]: I0317 17:38:31.793954 2156 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:38:31.793985 kubelet[2156]: E0317 17:38:31.793988 2156 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:38:32.020515 kubelet[2156]: I0317 17:38:32.020378 2156 apiserver.go:52] "Watching apiserver" Mar 17 17:38:32.029979 kubelet[2156]: I0317 17:38:32.029943 2156 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:38:33.510601 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-7.scope)... Mar 17 17:38:33.510615 systemd[1]: Reloading... Mar 17 17:38:33.573978 zram_generator::config[2476]: No configuration found. Mar 17 17:38:33.652202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:33.714814 systemd[1]: Reloading finished in 203 ms. Mar 17 17:38:33.749120 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:33.761834 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 17 17:38:33.762221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:33.762390 systemd[1]: kubelet.service: Consumed 1.304s CPU time, 117.2M memory peak, 0B memory swap peak. Mar 17 17:38:33.780839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:33.868503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:33.873148 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:38:33.910830 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:33.910830 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:38:33.910830 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:33.911184 kubelet[2512]: I0317 17:38:33.910867 2512 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:38:33.916360 kubelet[2512]: I0317 17:38:33.916319 2512 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:38:33.916360 kubelet[2512]: I0317 17:38:33.916351 2512 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:38:33.916572 kubelet[2512]: I0317 17:38:33.916550 2512 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:38:33.917962 kubelet[2512]: I0317 17:38:33.917940 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:38:33.919934 kubelet[2512]: I0317 17:38:33.919908 2512 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:38:33.927194 kubelet[2512]: E0317 17:38:33.927141 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:38:33.927194 kubelet[2512]: I0317 17:38:33.927191 2512 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:38:33.930067 kubelet[2512]: I0317 17:38:33.930049 2512 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:38:33.930171 kubelet[2512]: I0317 17:38:33.930159 2512 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:38:33.930278 kubelet[2512]: I0317 17:38:33.930254 2512 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:38:33.930447 kubelet[2512]: I0317 17:38:33.930279 2512 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:38:33.930518 kubelet[2512]: I0317 17:38:33.930457 2512 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:38:33.930518 kubelet[2512]: I0317 17:38:33.930467 2512 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:38:33.930518 kubelet[2512]: I0317 17:38:33.930493 2512 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:33.930600 kubelet[2512]: I0317 17:38:33.930587 2512 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:38:33.930636 kubelet[2512]: I0317 17:38:33.930602 2512 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:38:33.932016 kubelet[2512]: I0317 17:38:33.931993 2512 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:38:33.932016 kubelet[2512]: I0317 17:38:33.932017 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:38:33.934654 kubelet[2512]: I0317 17:38:33.932545 2512 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:38:33.934654 kubelet[2512]: I0317 17:38:33.933004 2512 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:38:33.934654 kubelet[2512]: I0317 17:38:33.933394 2512 server.go:1269] "Started kubelet" Mar 17 17:38:33.934936 kubelet[2512]: I0317 17:38:33.934756 2512 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:38:33.935685 kubelet[2512]: I0317 
17:38:33.935236 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:38:33.935808 kubelet[2512]: I0317 17:38:33.935790 2512 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:38:33.935885 kubelet[2512]: I0317 17:38:33.935868 2512 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:38:33.939627 kubelet[2512]: I0317 17:38:33.939598 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:38:33.939696 kubelet[2512]: I0317 17:38:33.939616 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:38:33.939949 kubelet[2512]: I0317 17:38:33.939922 2512 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:38:33.940285 kubelet[2512]: I0317 17:38:33.940262 2512 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:38:33.941032 kubelet[2512]: I0317 17:38:33.941008 2512 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:38:33.941270 kubelet[2512]: E0317 17:38:33.941245 2512 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:38:33.956647 kubelet[2512]: I0317 17:38:33.952650 2512 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:38:33.956647 kubelet[2512]: I0317 17:38:33.952763 2512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:38:33.959806 kubelet[2512]: I0317 17:38:33.958972 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:38:33.964399 kubelet[2512]: I0317 17:38:33.964335 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:38:33.964480 kubelet[2512]: I0317 17:38:33.964367 2512 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:38:33.964480 kubelet[2512]: I0317 17:38:33.964441 2512 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:38:33.965151 kubelet[2512]: E0317 17:38:33.964504 2512 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:38:33.965529 kubelet[2512]: I0317 17:38:33.965269 2512 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:38:33.966342 kubelet[2512]: E0317 17:38:33.966319 2512 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:38:33.995081 kubelet[2512]: I0317 17:38:33.995049 2512 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:38:33.995467 kubelet[2512]: I0317 17:38:33.995207 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:38:33.995467 kubelet[2512]: I0317 17:38:33.995230 2512 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:33.995467 kubelet[2512]: I0317 17:38:33.995358 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:38:33.995467 kubelet[2512]: I0317 17:38:33.995373 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:38:33.995467 kubelet[2512]: I0317 17:38:33.995390 2512 policy_none.go:49] "None policy: Start" Mar 17 17:38:33.995987 kubelet[2512]: I0317 17:38:33.995973 2512 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:38:33.996020 kubelet[2512]: I0317 17:38:33.995995 2512 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:38:33.996178 kubelet[2512]: I0317 17:38:33.996163 2512 state_mem.go:75] "Updated machine memory state" Mar 17 17:38:33.999737 kubelet[2512]: I0317 17:38:33.999710 2512 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:38:33.999884 kubelet[2512]: I0317 17:38:33.999862 2512 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:38:33.999912 kubelet[2512]: I0317 17:38:33.999879 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:38:34.000421 kubelet[2512]: I0317 17:38:34.000371 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:38:34.104769 kubelet[2512]: I0317 17:38:34.103542 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:38:34.111893 kubelet[2512]: I0317 17:38:34.111858 2512 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 17 17:38:34.111988 kubelet[2512]: I0317 17:38:34.111936 2512 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:38:34.242204 kubelet[2512]: I0317 17:38:34.242152 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:34.242204 kubelet[2512]: I0317 17:38:34.242206 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:34.242366 kubelet[2512]: I0317 17:38:34.242229 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:34.242366 kubelet[2512]: I0317 17:38:34.242245 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:34.242366 kubelet[2512]: I0317 17:38:34.242282 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:34.242366 kubelet[2512]: I0317 17:38:34.242299 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:38:34.242366 kubelet[2512]: I0317 17:38:34.242315 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/939aad522057d33f8280a69aef0be121-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"939aad522057d33f8280a69aef0be121\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:34.242484 kubelet[2512]: I0317 17:38:34.242354 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:34.242484 kubelet[2512]: I0317 17:38:34.242370 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:38:34.373346 kubelet[2512]: E0317 17:38:34.373245 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:34.373346 kubelet[2512]: E0317 17:38:34.373263 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:34.373346 kubelet[2512]: E0317 17:38:34.373245 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:34.932971 kubelet[2512]: I0317 17:38:34.932925 2512 apiserver.go:52] "Watching apiserver" Mar 17 17:38:34.941148 kubelet[2512]: I0317 17:38:34.941077 2512 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:38:34.979211 kubelet[2512]: E0317 17:38:34.979166 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:34.980085 kubelet[2512]: E0317 17:38:34.979972 2512 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:34.987004 kubelet[2512]: E0317 17:38:34.986926 2512 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:38:34.987082 kubelet[2512]: E0317 17:38:34.987068 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:35.013025 kubelet[2512]: I0317 17:38:35.012851 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.01283287 podStartE2EDuration="1.01283287s" podCreationTimestamp="2025-03-17 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:35.004806444 +0000 UTC m=+1.128774103" watchObservedRunningTime="2025-03-17 17:38:35.01283287 +0000 UTC m=+1.136800529" Mar 17 17:38:35.025642 kubelet[2512]: I0317 17:38:35.025434 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.025394716 podStartE2EDuration="1.025394716s" podCreationTimestamp="2025-03-17 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:35.013294263 +0000 UTC m=+1.137261882" watchObservedRunningTime="2025-03-17 17:38:35.025394716 +0000 UTC m=+1.149362375" Mar 17 17:38:35.043798 kubelet[2512]: I0317 17:38:35.043720 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.043659584 podStartE2EDuration="1.043659584s" podCreationTimestamp="2025-03-17 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:35.028177514 +0000 UTC m=+1.152145173" watchObservedRunningTime="2025-03-17 17:38:35.043659584 +0000 UTC m=+1.167627243" Mar 17 17:38:35.980959 kubelet[2512]: E0317 17:38:35.980351 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:35.980959 kubelet[2512]: E0317 17:38:35.980602 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:36.987585 kubelet[2512]: E0317 17:38:36.987554 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:38.552454 sudo[1615]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:38.553582 sshd[1614]: Connection closed by 10.0.0.1 port 52628 Mar 17 17:38:38.556708 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:38.560945 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:38:38.561192 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:52628.service: Deactivated successfully. Mar 17 17:38:38.562841 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 17 17:38:38.562995 systemd[1]: session-7.scope: Consumed 6.968s CPU time, 155.7M memory peak, 0B memory swap peak. Mar 17 17:38:38.564245 systemd-logind[1422]: Removed session 7. Mar 17 17:38:39.640638 kubelet[2512]: E0317 17:38:39.640595 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:39.755858 kubelet[2512]: I0317 17:38:39.755819 2512 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:38:39.762573 containerd[1432]: time="2025-03-17T17:38:39.762476457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:38:39.763255 kubelet[2512]: I0317 17:38:39.763236 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:38:39.988008 kubelet[2512]: E0317 17:38:39.985308 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:40.541657 systemd[1]: Created slice kubepods-besteffort-poddd985044_40dc_4759_9c55_367e86f0c979.slice - libcontainer container kubepods-besteffort-poddd985044_40dc_4759_9c55_367e86f0c979.slice. Mar 17 17:38:40.583356 kubelet[2512]: I0317 17:38:40.583327 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd985044-40dc-4759-9c55-367e86f0c979-xtables-lock\") pod \"kube-proxy-2v9t6\" (UID: \"dd985044-40dc-4759-9c55-367e86f0c979\") " pod="kube-system/kube-proxy-2v9t6" Mar 17 17:38:40.583356 kubelet[2512]: I0317 17:38:40.583358 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd985044-40dc-4759-9c55-367e86f0c979-lib-modules\") pod \"kube-proxy-2v9t6\" (UID: \"dd985044-40dc-4759-9c55-367e86f0c979\") " pod="kube-system/kube-proxy-2v9t6" Mar 17 17:38:40.583518 kubelet[2512]: I0317 17:38:40.583379 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cv8x\" (UniqueName: \"kubernetes.io/projected/dd985044-40dc-4759-9c55-367e86f0c979-kube-api-access-4cv8x\") pod \"kube-proxy-2v9t6\" (UID: \"dd985044-40dc-4759-9c55-367e86f0c979\") " pod="kube-system/kube-proxy-2v9t6" Mar 17 17:38:40.583518 kubelet[2512]: I0317 17:38:40.583398 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd985044-40dc-4759-9c55-367e86f0c979-kube-proxy\") pod \"kube-proxy-2v9t6\" (UID: \"dd985044-40dc-4759-9c55-367e86f0c979\") " pod="kube-system/kube-proxy-2v9t6" Mar 17 17:38:40.848191 systemd[1]: Created slice kubepods-besteffort-podde4386cb_5a6d_4207_bf76_7b73757176dc.slice - libcontainer container kubepods-besteffort-podde4386cb_5a6d_4207_bf76_7b73757176dc.slice. 
Mar 17 17:38:40.854320 kubelet[2512]: E0317 17:38:40.854290 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:40.856837 containerd[1432]: time="2025-03-17T17:38:40.856796055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v9t6,Uid:dd985044-40dc-4759-9c55-367e86f0c979,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:40.874684 containerd[1432]: time="2025-03-17T17:38:40.874527258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:40.875241 containerd[1432]: time="2025-03-17T17:38:40.875117533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:40.875241 containerd[1432]: time="2025-03-17T17:38:40.875149730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:40.875458 containerd[1432]: time="2025-03-17T17:38:40.875378033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:40.884747 kubelet[2512]: I0317 17:38:40.884705 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dh59\" (UniqueName: \"kubernetes.io/projected/de4386cb-5a6d-4207-bf76-7b73757176dc-kube-api-access-2dh59\") pod \"tigera-operator-64ff5465b7-mpk5r\" (UID: \"de4386cb-5a6d-4207-bf76-7b73757176dc\") " pod="tigera-operator/tigera-operator-64ff5465b7-mpk5r" Mar 17 17:38:40.884747 kubelet[2512]: I0317 17:38:40.884749 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de4386cb-5a6d-4207-bf76-7b73757176dc-var-lib-calico\") pod \"tigera-operator-64ff5465b7-mpk5r\" (UID: \"de4386cb-5a6d-4207-bf76-7b73757176dc\") " pod="tigera-operator/tigera-operator-64ff5465b7-mpk5r" Mar 17 17:38:40.891804 systemd[1]: Started cri-containerd-8d2a8dd2e7834456d493b7a6fa202c924d86b092753cd7a5f4c1c7bb8b9f20bc.scope - libcontainer container 8d2a8dd2e7834456d493b7a6fa202c924d86b092753cd7a5f4c1c7bb8b9f20bc. 
Mar 17 17:38:40.911201 containerd[1432]: time="2025-03-17T17:38:40.911138656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v9t6,Uid:dd985044-40dc-4759-9c55-367e86f0c979,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d2a8dd2e7834456d493b7a6fa202c924d86b092753cd7a5f4c1c7bb8b9f20bc\"" Mar 17 17:38:40.912890 kubelet[2512]: E0317 17:38:40.912866 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:40.915840 containerd[1432]: time="2025-03-17T17:38:40.915809178Z" level=info msg="CreateContainer within sandbox \"8d2a8dd2e7834456d493b7a6fa202c924d86b092753cd7a5f4c1c7bb8b9f20bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:38:40.931653 containerd[1432]: time="2025-03-17T17:38:40.931419463Z" level=info msg="CreateContainer within sandbox \"8d2a8dd2e7834456d493b7a6fa202c924d86b092753cd7a5f4c1c7bb8b9f20bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b10816161b6892bd00de9265af83d7099fc85d26a797823f7a3a5c50595affeb\"" Mar 17 17:38:40.932331 containerd[1432]: time="2025-03-17T17:38:40.932266278Z" level=info msg="StartContainer for \"b10816161b6892bd00de9265af83d7099fc85d26a797823f7a3a5c50595affeb\"" Mar 17 17:38:40.958834 systemd[1]: Started cri-containerd-b10816161b6892bd00de9265af83d7099fc85d26a797823f7a3a5c50595affeb.scope - libcontainer container b10816161b6892bd00de9265af83d7099fc85d26a797823f7a3a5c50595affeb. Mar 17 17:38:40.999535 containerd[1432]: time="2025-03-17T17:38:40.999451096Z" level=info msg="StartContainer for \"b10816161b6892bd00de9265af83d7099fc85d26a797823f7a3a5c50595affeb\" returns successfully" Mar 17 17:38:41.154778 containerd[1432]: time="2025-03-17T17:38:41.154578604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-mpk5r,Uid:de4386cb-5a6d-4207-bf76-7b73757176dc,Namespace:tigera-operator,Attempt:0,}" Mar 17 17:38:41.176310 containerd[1432]: time="2025-03-17T17:38:41.176053288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:41.176310 containerd[1432]: time="2025-03-17T17:38:41.176115523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:41.176310 containerd[1432]: time="2025-03-17T17:38:41.176127362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:41.176310 containerd[1432]: time="2025-03-17T17:38:41.176217996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:41.197152 systemd[1]: Started cri-containerd-5031df1aed5ba8c758cc89d7a0925b141ccd0920bd3f010ac1d5b1b1828dd669.scope - libcontainer container 5031df1aed5ba8c758cc89d7a0925b141ccd0920bd3f010ac1d5b1b1828dd669. 
Mar 17 17:38:41.226950 containerd[1432]: time="2025-03-17T17:38:41.226894724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-mpk5r,Uid:de4386cb-5a6d-4207-bf76-7b73757176dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5031df1aed5ba8c758cc89d7a0925b141ccd0920bd3f010ac1d5b1b1828dd669\"" Mar 17 17:38:41.229816 containerd[1432]: time="2025-03-17T17:38:41.229763476Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 17 17:38:42.012974 kubelet[2512]: E0317 17:38:42.012881 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:43.016133 kubelet[2512]: E0317 17:38:43.016101 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:43.164312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180021659.mount: Deactivated successfully. Mar 17 17:38:43.399612 containerd[1432]: time="2025-03-17T17:38:43.399491037Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:43.400081 containerd[1432]: time="2025-03-17T17:38:43.400044161Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 17 17:38:43.400927 containerd[1432]: time="2025-03-17T17:38:43.400899106Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:43.403290 containerd[1432]: time="2025-03-17T17:38:43.403244593Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:43.403938 containerd[1432]: time="2025-03-17T17:38:43.403913270Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 2.174109637s" Mar 17 17:38:43.403988 containerd[1432]: time="2025-03-17T17:38:43.403944428Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 17 17:38:43.408548 containerd[1432]: time="2025-03-17T17:38:43.408380459Z" level=info msg="CreateContainer within sandbox \"5031df1aed5ba8c758cc89d7a0925b141ccd0920bd3f010ac1d5b1b1828dd669\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:38:43.417732 containerd[1432]: time="2025-03-17T17:38:43.417693333Z" level=info msg="CreateContainer within sandbox \"5031df1aed5ba8c758cc89d7a0925b141ccd0920bd3f010ac1d5b1b1828dd669\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c22d7e28456e841db8760aba368cdfbb6148ae01661448d4bdabde0b68e7df0f\"" Mar 17 17:38:43.418499 containerd[1432]: time="2025-03-17T17:38:43.418449164Z" level=info msg="StartContainer for \"c22d7e28456e841db8760aba368cdfbb6148ae01661448d4bdabde0b68e7df0f\"" Mar 17 17:38:43.462775 systemd[1]: Started 
cri-containerd-c22d7e28456e841db8760aba368cdfbb6148ae01661448d4bdabde0b68e7df0f.scope - libcontainer container c22d7e28456e841db8760aba368cdfbb6148ae01661448d4bdabde0b68e7df0f. Mar 17 17:38:43.527939 containerd[1432]: time="2025-03-17T17:38:43.527766013Z" level=info msg="StartContainer for \"c22d7e28456e841db8760aba368cdfbb6148ae01661448d4bdabde0b68e7df0f\" returns successfully" Mar 17 17:38:44.027374 kubelet[2512]: I0317 17:38:44.027273 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2v9t6" podStartSLOduration=4.027256613 podStartE2EDuration="4.027256613s" podCreationTimestamp="2025-03-17 17:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:42.017791758 +0000 UTC m=+8.141759417" watchObservedRunningTime="2025-03-17 17:38:44.027256613 +0000 UTC m=+10.151224232" Mar 17 17:38:44.028295 kubelet[2512]: I0317 17:38:44.028106 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-mpk5r" podStartSLOduration=1.849250764 podStartE2EDuration="4.028096641s" podCreationTimestamp="2025-03-17 17:38:40 +0000 UTC" firstStartedPulling="2025-03-17 17:38:41.228195509 +0000 UTC m=+7.352163168" lastFinishedPulling="2025-03-17 17:38:43.407041386 +0000 UTC m=+9.531009045" observedRunningTime="2025-03-17 17:38:44.027186337 +0000 UTC m=+10.151153996" watchObservedRunningTime="2025-03-17 17:38:44.028096641 +0000 UTC m=+10.152064300" Mar 17 17:38:45.722143 kubelet[2512]: E0317 17:38:45.722091 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:46.995157 kubelet[2512]: E0317 17:38:46.995066 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:47.727028 systemd[1]: Created slice kubepods-besteffort-podb00b20fb_31bf_4637_8aff_93e7f97f7de2.slice - libcontainer container kubepods-besteffort-podb00b20fb_31bf_4637_8aff_93e7f97f7de2.slice. 
Mar 17 17:38:47.732939 kubelet[2512]: I0317 17:38:47.730017 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b00b20fb-31bf-4637-8aff-93e7f97f7de2-typha-certs\") pod \"calico-typha-644b7d5bfb-fl8rt\" (UID: \"b00b20fb-31bf-4637-8aff-93e7f97f7de2\") " pod="calico-system/calico-typha-644b7d5bfb-fl8rt" Mar 17 17:38:47.732939 kubelet[2512]: I0317 17:38:47.730195 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhffd\" (UniqueName: \"kubernetes.io/projected/b00b20fb-31bf-4637-8aff-93e7f97f7de2-kube-api-access-rhffd\") pod \"calico-typha-644b7d5bfb-fl8rt\" (UID: \"b00b20fb-31bf-4637-8aff-93e7f97f7de2\") " pod="calico-system/calico-typha-644b7d5bfb-fl8rt" Mar 17 17:38:47.732939 kubelet[2512]: I0317 17:38:47.730226 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b00b20fb-31bf-4637-8aff-93e7f97f7de2-tigera-ca-bundle\") pod \"calico-typha-644b7d5bfb-fl8rt\" (UID: \"b00b20fb-31bf-4637-8aff-93e7f97f7de2\") " pod="calico-system/calico-typha-644b7d5bfb-fl8rt" Mar 17 17:38:47.764437 systemd[1]: Created slice kubepods-besteffort-pod08438b59_dd76_48a8_af38_c962e3ad9fc2.slice - libcontainer container kubepods-besteffort-pod08438b59_dd76_48a8_af38_c962e3ad9fc2.slice. Mar 17 17:38:47.831168 kubelet[2512]: I0317 17:38:47.831133 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-xtables-lock\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.831168 kubelet[2512]: I0317 17:38:47.831169 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-lib-modules\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832284 kubelet[2512]: I0317 17:38:47.831187 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-lib-calico\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832284 kubelet[2512]: I0317 17:38:47.831229 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08438b59-dd76-48a8-af38-c962e3ad9fc2-tigera-ca-bundle\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832284 kubelet[2512]: I0317 17:38:47.831246 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-net-dir\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832284 kubelet[2512]: I0317 17:38:47.831270 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-policysync\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832284 kubelet[2512]: I0317 17:38:47.831286 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-log-dir\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832398 kubelet[2512]: I0317 17:38:47.831303 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-flexvol-driver-host\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832398 kubelet[2512]: I0317 17:38:47.831320 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08438b59-dd76-48a8-af38-c962e3ad9fc2-node-certs\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832398 kubelet[2512]: I0317 17:38:47.831346 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmktg\" (UniqueName: \"kubernetes.io/projected/08438b59-dd76-48a8-af38-c962e3ad9fc2-kube-api-access-jmktg\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832398 kubelet[2512]: I0317 17:38:47.831368 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-run-calico\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.832398 kubelet[2512]: I0317 17:38:47.831385 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-bin-dir\") pod \"calico-node-tp8wk\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " pod="calico-system/calico-node-tp8wk" Mar 17 17:38:47.876445 kubelet[2512]: E0317 17:38:47.876260 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e" Mar 17 17:38:47.938400 kubelet[2512]: I0317 17:38:47.936933 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cd6988e4-6af5-42c1-bd82-b51b176a8f5e-varrun\") pod \"csi-node-driver-wbfxc\" (UID: \"cd6988e4-6af5-42c1-bd82-b51b176a8f5e\") " pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:47.938400 kubelet[2512]: I0317 17:38:47.936970 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlc58\" (UniqueName: 
\"kubernetes.io/projected/cd6988e4-6af5-42c1-bd82-b51b176a8f5e-kube-api-access-qlc58\") pod \"csi-node-driver-wbfxc\" (UID: \"cd6988e4-6af5-42c1-bd82-b51b176a8f5e\") " pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:47.938400 kubelet[2512]: I0317 17:38:47.937042 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cd6988e4-6af5-42c1-bd82-b51b176a8f5e-socket-dir\") pod \"csi-node-driver-wbfxc\" (UID: \"cd6988e4-6af5-42c1-bd82-b51b176a8f5e\") " pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:47.938400 kubelet[2512]: I0317 17:38:47.937057 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cd6988e4-6af5-42c1-bd82-b51b176a8f5e-registration-dir\") pod \"csi-node-driver-wbfxc\" (UID: \"cd6988e4-6af5-42c1-bd82-b51b176a8f5e\") " pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:47.938400 kubelet[2512]: I0317 17:38:47.937093 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6988e4-6af5-42c1-bd82-b51b176a8f5e-kubelet-dir\") pod \"csi-node-driver-wbfxc\" (UID: \"cd6988e4-6af5-42c1-bd82-b51b176a8f5e\") " pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:47.955523 kubelet[2512]: E0317 17:38:47.955493 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:38:47.955698 kubelet[2512]: W0317 17:38:47.955681 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:38:47.955810 kubelet[2512]: E0317 17:38:47.955797 2512 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:38:47.961852 kubelet[2512]: E0317 17:38:47.961829 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:38:47.961965 kubelet[2512]: W0317 17:38:47.961950 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:38:47.962025 kubelet[2512]: E0317 17:38:47.962014 2512 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:38:48.032715 kubelet[2512]: E0317 17:38:48.032599 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:48.033240 containerd[1432]: time="2025-03-17T17:38:48.032974181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644b7d5bfb-fl8rt,Uid:b00b20fb-31bf-4637-8aff-93e7f97f7de2,Namespace:calico-system,Attempt:0,}" Mar 17 17:38:48.038221 kubelet[2512]: E0317 17:38:48.038198 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:38:48.038221 kubelet[2512]: W0317 17:38:48.038218 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:38:48.038341 kubelet[2512]: E0317 17:38:48.038236 2512 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:38:48.067330 kubelet[2512]: E0317 17:38:48.067268 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:48.069138 containerd[1432]: time="2025-03-17T17:38:48.068879178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tp8wk,Uid:08438b59-dd76-48a8-af38-c962e3ad9fc2,Namespace:calico-system,Attempt:0,}" Mar 17 17:38:48.086943 containerd[1432]: time="2025-03-17T17:38:48.086831157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:48.086943 containerd[1432]: time="2025-03-17T17:38:48.086881354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:48.086943 containerd[1432]: time="2025-03-17T17:38:48.086892674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:48.087261 containerd[1432]: time="2025-03-17T17:38:48.087024467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:48.099647 containerd[1432]: time="2025-03-17T17:38:48.099494081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:48.099647 containerd[1432]: time="2025-03-17T17:38:48.099586516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:48.099910 containerd[1432]: time="2025-03-17T17:38:48.099612795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:48.099978 containerd[1432]: time="2025-03-17T17:38:48.099893221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:48.105729 update_engine[1424]: I20250317 17:38:48.105170 1424 update_attempter.cc:509] Updating boot flags... Mar 17 17:38:48.107771 systemd[1]: Started cri-containerd-a231e10d8e93c0e1e4fecc4abc5deebe9918f323b6c6e5898bf3841e0c0995c4.scope - libcontainer container a231e10d8e93c0e1e4fecc4abc5deebe9918f323b6c6e5898bf3841e0c0995c4. Mar 17 17:38:48.112598 systemd[1]: Started cri-containerd-ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af.scope - libcontainer container ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af. Mar 17 17:38:48.153748 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3017) Mar 17 17:38:48.160461 containerd[1432]: time="2025-03-17T17:38:48.160245590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644b7d5bfb-fl8rt,Uid:b00b20fb-31bf-4637-8aff-93e7f97f7de2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a231e10d8e93c0e1e4fecc4abc5deebe9918f323b6c6e5898bf3841e0c0995c4\"" Mar 17 17:38:48.162545 kubelet[2512]: E0317 17:38:48.161220 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:48.163231 containerd[1432]: time="2025-03-17T17:38:48.163181563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 17 17:38:48.165433 containerd[1432]: time="2025-03-17T17:38:48.165399332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tp8wk,Uid:08438b59-dd76-48a8-af38-c962e3ad9fc2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\"" Mar 17 17:38:48.168645 kubelet[2512]: E0317 17:38:48.167122 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:48.187696 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3021) Mar 17 17:38:49.965289 containerd[1432]: time="2025-03-17T17:38:49.965243101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:49.969433 containerd[1432]: time="2025-03-17T17:38:49.969373863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 17 17:38:49.970947 containerd[1432]: time="2025-03-17T17:38:49.970910710Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:49.971116 kubelet[2512]: E0317 17:38:49.971083 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e" Mar 17 17:38:49.974155 containerd[1432]: time="2025-03-17T17:38:49.973775333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:49.974566 containerd[1432]: 
time="2025-03-17T17:38:49.974520697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.811289697s" Mar 17 17:38:49.974566 containerd[1432]: time="2025-03-17T17:38:49.974555416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 17 17:38:49.976362 containerd[1432]: time="2025-03-17T17:38:49.976296812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:38:49.989545 containerd[1432]: time="2025-03-17T17:38:49.989514381Z" level=info msg="CreateContainer within sandbox \"a231e10d8e93c0e1e4fecc4abc5deebe9918f323b6c6e5898bf3841e0c0995c4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:38:50.010848 containerd[1432]: time="2025-03-17T17:38:50.010788948Z" level=info msg="CreateContainer within sandbox \"a231e10d8e93c0e1e4fecc4abc5deebe9918f323b6c6e5898bf3841e0c0995c4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520\"" Mar 17 17:38:50.011846 containerd[1432]: time="2025-03-17T17:38:50.011584111Z" level=info msg="StartContainer for \"5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520\"" Mar 17 17:38:50.044839 systemd[1]: Started cri-containerd-5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520.scope - libcontainer container 5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520. Mar 17 17:38:50.137725 containerd[1432]: time="2025-03-17T17:38:50.137574819Z" level=info msg="StartContainer for \"5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520\" returns successfully" Mar 17 17:38:51.048164 kubelet[2512]: E0317 17:38:51.048071 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:51.061186 kubelet[2512]: E0317 17:38:51.060931 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:38:51.061186 kubelet[2512]: W0317 17:38:51.060981 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:38:51.061186 kubelet[2512]: E0317 17:38:51.061005 2512 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:38:51.127835 containerd[1432]: time="2025-03-17T17:38:51.127781274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:51.128666 containerd[1432]: time="2025-03-17T17:38:51.128253894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 17 17:38:51.129093 containerd[1432]: time="2025-03-17T17:38:51.129071618Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:51.131074 containerd[1432]: time="2025-03-17T17:38:51.131039773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:51.131849 containerd[1432]: time="2025-03-17T17:38:51.131820259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.155492008s" Mar 17 17:38:51.131912 containerd[1432]: time="2025-03-17T17:38:51.131854938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 17 17:38:51.134213 containerd[1432]: time="2025-03-17T17:38:51.134172877Z" level=info msg="CreateContainer within sandbox \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:38:51.147268 containerd[1432]: time="2025-03-17T17:38:51.147220591Z" level=info msg="CreateContainer within sandbox 
\"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\"" Mar 17 17:38:51.149183 containerd[1432]: time="2025-03-17T17:38:51.147775607Z" level=info msg="StartContainer for \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\"" Mar 17 17:38:51.174783 systemd[1]: Started cri-containerd-c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7.scope - libcontainer container c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7. Mar 17 17:38:51.206767 containerd[1432]: time="2025-03-17T17:38:51.206717012Z" level=info msg="StartContainer for \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\" returns successfully" Mar 17 17:38:51.238760 systemd[1]: cri-containerd-c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7.scope: Deactivated successfully. Mar 17 17:38:51.280532 containerd[1432]: time="2025-03-17T17:38:51.275723300Z" level=info msg="shim disconnected" id=c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7 namespace=k8s.io Mar 17 17:38:51.280532 containerd[1432]: time="2025-03-17T17:38:51.280519012Z" level=warning msg="cleaning up after shim disconnected" id=c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7 namespace=k8s.io Mar 17 17:38:51.280532 containerd[1432]: time="2025-03-17T17:38:51.280532531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:51.965171 kubelet[2512]: E0317 17:38:51.965116 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e" Mar 17 17:38:51.982595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7-rootfs.mount: Deactivated successfully. 
Mar 17 17:38:52.050765 kubelet[2512]: I0317 17:38:52.050729 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:38:52.051158 kubelet[2512]: E0317 17:38:52.051013 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:52.051158 kubelet[2512]: E0317 17:38:52.051026 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:52.052195 containerd[1432]: time="2025-03-17T17:38:52.052159299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 17 17:38:52.067079 kubelet[2512]: I0317 17:38:52.067015 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-644b7d5bfb-fl8rt" podStartSLOduration=3.253562413 podStartE2EDuration="5.066996685s" podCreationTimestamp="2025-03-17 17:38:47 +0000 UTC" firstStartedPulling="2025-03-17 17:38:48.162350045 +0000 UTC m=+14.286317704" lastFinishedPulling="2025-03-17 17:38:49.975784317 +0000 UTC m=+16.099751976" observedRunningTime="2025-03-17 17:38:51.083022655 +0000 UTC m=+17.206990314" watchObservedRunningTime="2025-03-17 17:38:52.066996685 +0000 UTC m=+18.190964344"
Mar 17 17:38:53.968656 kubelet[2512]: E0317 17:38:53.966428 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e"
Mar 17 17:38:55.965098 kubelet[2512]: E0317 17:38:55.965058 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e"
Mar 17 17:38:56.164947 containerd[1432]: time="2025-03-17T17:38:56.164906045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:38:56.166033 containerd[1432]: time="2025-03-17T17:38:56.165988087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396"
Mar 17 17:38:56.167234 containerd[1432]: time="2025-03-17T17:38:56.167195486Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:38:56.169921 containerd[1432]: time="2025-03-17T17:38:56.169886593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:38:56.171665 containerd[1432]: time="2025-03-17T17:38:56.171633332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 4.119418036s"
Mar 17 17:38:56.171711 containerd[1432]: time="2025-03-17T17:38:56.171664411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\""
Mar 17 17:38:56.174480 containerd[1432]: time="2025-03-17T17:38:56.174443956Z" level=info msg="CreateContainer within sandbox \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 17 17:38:56.190146 containerd[1432]: time="2025-03-17T17:38:56.190104455Z" level=info msg="CreateContainer within sandbox \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\""
Mar 17 17:38:56.191932 containerd[1432]: time="2025-03-17T17:38:56.191901673Z" level=info msg="StartContainer for \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\""
Mar 17 17:38:56.215765 systemd[1]: Started cri-containerd-6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c.scope - libcontainer container 6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c.
Mar 17 17:38:56.242558 containerd[1432]: time="2025-03-17T17:38:56.242520526Z" level=info msg="StartContainer for \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\" returns successfully"
Mar 17 17:38:56.820666 systemd[1]: cri-containerd-6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c.scope: Deactivated successfully.
Mar 17 17:38:56.840218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c-rootfs.mount: Deactivated successfully.
Mar 17 17:38:56.861140 kubelet[2512]: I0317 17:38:56.861059 2512 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 17:38:56.878025 containerd[1432]: time="2025-03-17T17:38:56.877525695Z" level=info msg="shim disconnected" id=6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c namespace=k8s.io
Mar 17 17:38:56.878025 containerd[1432]: time="2025-03-17T17:38:56.877578573Z" level=warning msg="cleaning up after shim disconnected" id=6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c namespace=k8s.io
Mar 17 17:38:56.878025 containerd[1432]: time="2025-03-17T17:38:56.877595973Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:56.906008 systemd[1]: Created slice kubepods-burstable-podfb2cacc2_3045_4f05_a115_8af2b8c3ae93.slice - libcontainer container kubepods-burstable-podfb2cacc2_3045_4f05_a115_8af2b8c3ae93.slice.
Mar 17 17:38:56.912520 systemd[1]: Created slice kubepods-besteffort-pod99b568dd_b905_4754_b6a7_db2767a8c584.slice - libcontainer container kubepods-besteffort-pod99b568dd_b905_4754_b6a7_db2767a8c584.slice.
Mar 17 17:38:56.917043 systemd[1]: Created slice kubepods-besteffort-podf65e465b_a83f_4cf0_bb51_23b66ac6541f.slice - libcontainer container kubepods-besteffort-podf65e465b_a83f_4cf0_bb51_23b66ac6541f.slice.
Mar 17 17:38:56.923044 systemd[1]: Created slice kubepods-burstable-pod4bcadc09_d993_4b6b_a06f_decd561e1fef.slice - libcontainer container kubepods-burstable-pod4bcadc09_d993_4b6b_a06f_decd561e1fef.slice.
Mar 17 17:38:56.941088 systemd[1]: Created slice kubepods-besteffort-pod7dd0b36a_0dc5_4c34_a561_3245bd3255c4.slice - libcontainer container kubepods-besteffort-pod7dd0b36a_0dc5_4c34_a561_3245bd3255c4.slice.
Mar 17 17:38:57.059563 kubelet[2512]: E0317 17:38:57.059535 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:57.060356 containerd[1432]: time="2025-03-17T17:38:57.060111640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 17 17:38:57.074242 kubelet[2512]: I0317 17:38:57.074078 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99b568dd-b905-4754-b6a7-db2767a8c584-calico-apiserver-certs\") pod \"calico-apiserver-6b59c58749-s4pv2\" (UID: \"99b568dd-b905-4754-b6a7-db2767a8c584\") " pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2"
Mar 17 17:38:57.074242 kubelet[2512]: I0317 17:38:57.074126 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zjs\" (UniqueName: \"kubernetes.io/projected/99b568dd-b905-4754-b6a7-db2767a8c584-kube-api-access-k2zjs\") pod \"calico-apiserver-6b59c58749-s4pv2\" (UID: \"99b568dd-b905-4754-b6a7-db2767a8c584\") " pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2"
Mar 17 17:38:57.074242 kubelet[2512]: I0317 17:38:57.074220 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgvc\" (UniqueName: \"kubernetes.io/projected/fb2cacc2-3045-4f05-a115-8af2b8c3ae93-kube-api-access-9pgvc\") pod \"coredns-6f6b679f8f-smb5v\" (UID: \"fb2cacc2-3045-4f05-a115-8af2b8c3ae93\") " pod="kube-system/coredns-6f6b679f8f-smb5v"
Mar 17 17:38:57.074401 kubelet[2512]: I0317 17:38:57.074266 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b25\" (UniqueName: \"kubernetes.io/projected/4bcadc09-d993-4b6b-a06f-decd561e1fef-kube-api-access-f2b25\") pod \"coredns-6f6b679f8f-4wvdn\" (UID: \"4bcadc09-d993-4b6b-a06f-decd561e1fef\") " pod="kube-system/coredns-6f6b679f8f-4wvdn"
Mar 17 17:38:57.074401 kubelet[2512]: I0317 17:38:57.074299 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f65e465b-a83f-4cf0-bb51-23b66ac6541f-calico-apiserver-certs\") pod \"calico-apiserver-6b59c58749-tz5hx\" (UID: \"f65e465b-a83f-4cf0-bb51-23b66ac6541f\") " pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx"
Mar 17 17:38:57.074401 kubelet[2512]: I0317 17:38:57.074361 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh2jq\" (UniqueName: \"kubernetes.io/projected/f65e465b-a83f-4cf0-bb51-23b66ac6541f-kube-api-access-nh2jq\") pod \"calico-apiserver-6b59c58749-tz5hx\" (UID: \"f65e465b-a83f-4cf0-bb51-23b66ac6541f\") " pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx"
Mar 17 17:38:57.074401 kubelet[2512]: I0317 17:38:57.074377 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcadc09-d993-4b6b-a06f-decd561e1fef-config-volume\") pod \"coredns-6f6b679f8f-4wvdn\" (UID: \"4bcadc09-d993-4b6b-a06f-decd561e1fef\") " pod="kube-system/coredns-6f6b679f8f-4wvdn"
Mar 17 17:38:57.074492 kubelet[2512]: I0317 17:38:57.074413 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-tigera-ca-bundle\") pod \"calico-kube-controllers-ff5ffdc75-plj5f\" (UID: \"7dd0b36a-0dc5-4c34-a561-3245bd3255c4\") " pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f"
Mar 17 17:38:57.074492 kubelet[2512]: I0317 17:38:57.074442 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pct4\" (UniqueName: \"kubernetes.io/projected/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-kube-api-access-4pct4\") pod \"calico-kube-controllers-ff5ffdc75-plj5f\" (UID: \"7dd0b36a-0dc5-4c34-a561-3245bd3255c4\") " pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f"
Mar 17 17:38:57.074492 kubelet[2512]: I0317 17:38:57.074470 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb2cacc2-3045-4f05-a115-8af2b8c3ae93-config-volume\") pod \"coredns-6f6b679f8f-smb5v\" (UID: \"fb2cacc2-3045-4f05-a115-8af2b8c3ae93\") " pod="kube-system/coredns-6f6b679f8f-smb5v"
Mar 17 17:38:57.212270 kubelet[2512]: E0317 17:38:57.211861 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:57.212749 containerd[1432]: time="2025-03-17T17:38:57.212710876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:0,}"
Mar 17 17:38:57.215444 containerd[1432]: time="2025-03-17T17:38:57.215416387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:0,}"
Mar 17 17:38:57.221691 containerd[1432]: time="2025-03-17T17:38:57.221655460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:0,}"
Mar 17 17:38:57.237943 kubelet[2512]: E0317 17:38:57.237913 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:57.248003 containerd[1432]: time="2025-03-17T17:38:57.247781397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:0,}"
Mar 17 17:38:57.265548 containerd[1432]: time="2025-03-17T17:38:57.265523090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:0,}"
Mar 17 17:38:57.617030 containerd[1432]: time="2025-03-17T17:38:57.616967914Z" level=error msg="Failed to destroy network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.618489 containerd[1432]: time="2025-03-17T17:38:57.618452945Z" level=error msg="Failed to destroy network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.619567 containerd[1432]: time="2025-03-17T17:38:57.619530549Z" level=error msg="encountered an error cleaning up failed sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.619647 containerd[1432]: time="2025-03-17T17:38:57.619599907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.622563 kubelet[2512]: E0317 17:38:57.622401 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.622563 kubelet[2512]: E0317 17:38:57.622477 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f"
Mar 17 17:38:57.622563 kubelet[2512]: E0317 17:38:57.622496 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f"
Mar 17 17:38:57.622780 kubelet[2512]: E0317 17:38:57.622549 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" podUID="7dd0b36a-0dc5-4c34-a561-3245bd3255c4"
Mar 17 17:38:57.625070 containerd[1432]: time="2025-03-17T17:38:57.625016808Z" level=error msg="Failed to destroy network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.625331 containerd[1432]: time="2025-03-17T17:38:57.625300358Z" level=error msg="encountered an error cleaning up failed sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.625393 containerd[1432]: time="2025-03-17T17:38:57.625373396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.625684 kubelet[2512]: E0317 17:38:57.625528 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.625684 kubelet[2512]: E0317 17:38:57.625567 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2"
Mar 17 17:38:57.625684 kubelet[2512]: E0317 17:38:57.625591 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2"
Mar 17 17:38:57.625806 kubelet[2512]: E0317 17:38:57.625643 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" podUID="99b568dd-b905-4754-b6a7-db2767a8c584"
Mar 17 17:38:57.629322 containerd[1432]: time="2025-03-17T17:38:57.629155191Z" level=error msg="encountered an error cleaning up failed sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.629322 containerd[1432]: time="2025-03-17T17:38:57.629217229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.629453 kubelet[2512]: E0317 17:38:57.629360 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.629453 kubelet[2512]: E0317 17:38:57.629394 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v"
Mar 17 17:38:57.629453 kubelet[2512]: E0317 17:38:57.629411 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v"
Mar 17 17:38:57.630541 kubelet[2512]: E0317 17:38:57.629445 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-smb5v" podUID="fb2cacc2-3045-4f05-a115-8af2b8c3ae93"
Mar 17 17:38:57.634810 containerd[1432]: time="2025-03-17T17:38:57.634703808Z" level=error msg="Failed to destroy network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.635040 containerd[1432]: time="2025-03-17T17:38:57.635001398Z" level=error msg="encountered an error cleaning up failed sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.635106 containerd[1432]: time="2025-03-17T17:38:57.635077755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.635164 containerd[1432]: time="2025-03-17T17:38:57.635018597Z" level=error msg="Failed to destroy network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.636057 kubelet[2512]: E0317 17:38:57.635386 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.636057 kubelet[2512]: E0317 17:38:57.635422 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx"
Mar 17 17:38:57.636057 kubelet[2512]: E0317 17:38:57.635437 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx"
Mar 17 17:38:57.636189 containerd[1432]: time="2025-03-17T17:38:57.635394585Z" level=error msg="encountered an error cleaning up failed sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.636189 containerd[1432]: time="2025-03-17T17:38:57.635464303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.636239 kubelet[2512]: E0317 17:38:57.635465 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" podUID="f65e465b-a83f-4cf0-bb51-23b66ac6541f"
Mar 17 17:38:57.636698 kubelet[2512]: E0317 17:38:57.636435 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:57.636698 kubelet[2512]: E0317 17:38:57.636470 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn"
Mar 17 17:38:57.636698 kubelet[2512]: E0317 17:38:57.636487 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn"
Mar 17 17:38:57.636819 kubelet[2512]: E0317 17:38:57.636519 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4wvdn" podUID="4bcadc09-d993-4b6b-a06f-decd561e1fef"
Mar 17 17:38:57.971385 systemd[1]: Created slice kubepods-besteffort-podcd6988e4_6af5_42c1_bd82_b51b176a8f5e.slice - libcontainer container kubepods-besteffort-podcd6988e4_6af5_42c1_bd82_b51b176a8f5e.slice.
Mar 17 17:38:57.973932 containerd[1432]: time="2025-03-17T17:38:57.973595446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:0,}"
Mar 17 17:38:58.020615 containerd[1432]: time="2025-03-17T17:38:58.020549480Z" level=error msg="Failed to destroy network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.021155 containerd[1432]: time="2025-03-17T17:38:58.021042664Z" level=error msg="encountered an error cleaning up failed sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.021155 containerd[1432]: time="2025-03-17T17:38:58.021115502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.021572 kubelet[2512]: E0317 17:38:58.021471 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.021755 kubelet[2512]: E0317 17:38:58.021552 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc"
Mar 17 17:38:58.021755 kubelet[2512]: E0317 17:38:58.021665 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc"
Mar 17 17:38:58.021875 kubelet[2512]: E0317 17:38:58.021715 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e"
Mar 17 17:38:58.062891 kubelet[2512]: I0317 17:38:58.062814 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711"
Mar 17 17:38:58.064679 containerd[1432]: time="2025-03-17T17:38:58.064320253Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\""
Mar 17 17:38:58.064679 containerd[1432]: time="2025-03-17T17:38:58.064477408Z" level=info msg="Ensure that sandbox c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711 in task-service has been cleanup successfully"
Mar 17 17:38:58.065825 containerd[1432]: time="2025-03-17T17:38:58.065796486Z" level=info msg="TearDown network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" successfully"
Mar 17 17:38:58.065825 containerd[1432]: time="2025-03-17T17:38:58.065818405Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" returns successfully"
Mar 17 17:38:58.066767 containerd[1432]: time="2025-03-17T17:38:58.066740296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:1,}"
Mar 17 17:38:58.067480 kubelet[2512]: I0317 17:38:58.067458 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca"
Mar 17 17:38:58.067859 containerd[1432]: time="2025-03-17T17:38:58.067837021Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\""
Mar 17 17:38:58.068854 containerd[1432]: time="2025-03-17T17:38:58.068824950Z" level=info msg="Ensure that sandbox e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca in task-service has been cleanup successfully"
Mar 17 17:38:58.069390 containerd[1432]: time="2025-03-17T17:38:58.069248737Z" level=info msg="TearDown network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" successfully"
Mar 17 17:38:58.069390 containerd[1432]: time="2025-03-17T17:38:58.069271536Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" returns successfully"
Mar 17 17:38:58.069744 kubelet[2512]: E0317 17:38:58.069609 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:58.069744 kubelet[2512]: I0317 17:38:58.069633 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3"
Mar 17 17:38:58.070169 containerd[1432]: time="2025-03-17T17:38:58.069940155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:1,}"
Mar 17 17:38:58.070201 containerd[1432]: time="2025-03-17T17:38:58.070186427Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\""
Mar 17 17:38:58.070361 containerd[1432]: time="2025-03-17T17:38:58.070317783Z" level=info msg="Ensure that sandbox 90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3 in task-service has been cleanup successfully"
Mar 17 17:38:58.070527 containerd[1432]: time="2025-03-17T17:38:58.070503257Z" level=info msg="TearDown network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" successfully"
Mar 17 17:38:58.070527 containerd[1432]: time="2025-03-17T17:38:58.070521656Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" returns successfully"
Mar 17 17:38:58.071430 containerd[1432]: time="2025-03-17T17:38:58.071402668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:1,}"
Mar 17 17:38:58.072679 kubelet[2512]: I0317 17:38:58.072244 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430"
Mar 17 17:38:58.073586 containerd[1432]: time="2025-03-17T17:38:58.073389125Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\""
Mar 17 17:38:58.073586 containerd[1432]: time="2025-03-17T17:38:58.073517721Z" level=info msg="Ensure that sandbox 562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430 in task-service has been cleanup successfully"
Mar 17 17:38:58.073943 containerd[1432]: time="2025-03-17T17:38:58.073726955Z" level=info msg="TearDown network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" successfully"
Mar 17 17:38:58.073943 containerd[1432]: time="2025-03-17T17:38:58.073743234Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" returns successfully"
Mar 17 17:38:58.074288 kubelet[2512]: I0317 17:38:58.074157 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf"
Mar 17 17:38:58.075105 containerd[1432]: time="2025-03-17T17:38:58.074491970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:1,}"
Mar 17 17:38:58.075105 containerd[1432]: time="2025-03-17T17:38:58.074569208Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\""
Mar 17 17:38:58.075105 containerd[1432]: time="2025-03-17T17:38:58.074737683Z" level=info msg="Ensure that sandbox 024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf in task-service has been cleanup successfully"
Mar 17 17:38:58.075105 containerd[1432]: time="2025-03-17T17:38:58.074915237Z" level=info msg="TearDown network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" successfully"
Mar 17 17:38:58.075105 containerd[1432]: time="2025-03-17T17:38:58.074930916Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" returns successfully"
Mar 17 17:38:58.075611 kubelet[2512]: E0317 17:38:58.075410 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:38:58.076146 containerd[1432]: time="2025-03-17T17:38:58.076116079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:1,}"
Mar 17 17:38:58.076710 kubelet[2512]: I0317 17:38:58.076685 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8"
Mar 17 17:38:58.078549 containerd[1432]: time="2025-03-17T17:38:58.078504683Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\""
Mar 17 17:38:58.079099 containerd[1432]: time="2025-03-17T17:38:58.079060666Z" level=info msg="Ensure that sandbox 0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8 in task-service has been cleanup successfully"
Mar 17 17:38:58.089531 containerd[1432]: time="2025-03-17T17:38:58.089489295Z" level=info msg="TearDown network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" successfully"
Mar 17 17:38:58.089531 containerd[1432]: time="2025-03-17T17:38:58.089527494Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" returns successfully"
Mar 17 17:38:58.090367 containerd[1432]: time="2025-03-17T17:38:58.090338988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:1,}"
Mar 17 17:38:58.218492 containerd[1432]: time="2025-03-17T17:38:58.218318652Z" level=error msg="Failed to destroy network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.220684 containerd[1432]: time="2025-03-17T17:38:58.219436377Z" level=error msg="encountered an error cleaning up failed sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.220684 containerd[1432]: time="2025-03-17T17:38:58.219521774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.220818 kubelet[2512]: E0317 17:38:58.219764 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.220818 kubelet[2512]: E0317 17:38:58.219824 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2"
Mar 17 17:38:58.220818 kubelet[2512]: E0317 17:38:58.219843 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2"
Mar 17 17:38:58.220919 containerd[1432]: time="2025-03-17T17:38:58.220769815Z" level=error msg="Failed to destroy network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.220946 kubelet[2512]: E0317 17:38:58.219885 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" podUID="99b568dd-b905-4754-b6a7-db2767a8c584"
Mar 17 17:38:58.222065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f-shm.mount: Deactivated successfully.
Mar 17 17:38:58.224947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975-shm.mount: Deactivated successfully.
Mar 17 17:38:58.225920 kubelet[2512]: E0317 17:38:58.224722 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.225920 kubelet[2512]: E0317 17:38:58.224769 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f"
Mar 17 17:38:58.225920 kubelet[2512]: E0317 17:38:58.224790 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f"
Mar 17 17:38:58.226138 containerd[1432]: time="2025-03-17T17:38:58.223743000Z" level=error msg="encountered an error cleaning up failed sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.226138 containerd[1432]: time="2025-03-17T17:38:58.223811558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.226239 kubelet[2512]: E0317 17:38:58.224827 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" podUID="7dd0b36a-0dc5-4c34-a561-3245bd3255c4"
Mar 17 17:38:58.227544 containerd[1432]: time="2025-03-17T17:38:58.227440283Z" level=error msg="Failed to destroy network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.229760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e-shm.mount: Deactivated successfully.
Mar 17 17:38:58.230195 containerd[1432]: time="2025-03-17T17:38:58.230055680Z" level=error msg="encountered an error cleaning up failed sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.230195 containerd[1432]: time="2025-03-17T17:38:58.230112958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.230312 kubelet[2512]: E0317 17:38:58.230277 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.230353 kubelet[2512]: E0317 17:38:58.230321 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v"
Mar 17 17:38:58.230353 kubelet[2512]: E0317 17:38:58.230346 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v"
Mar 17 17:38:58.230727 kubelet[2512]: E0317 17:38:58.230385 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-smb5v" podUID="fb2cacc2-3045-4f05-a115-8af2b8c3ae93"
Mar 17 17:38:58.240041 containerd[1432]: time="2025-03-17T17:38:58.239062515Z" level=error msg="Failed to destroy network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.240304 containerd[1432]: time="2025-03-17T17:38:58.240265557Z" level=error msg="encountered an error cleaning up failed sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.240355 containerd[1432]: time="2025-03-17T17:38:58.240329635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.242935 kubelet[2512]: E0317 17:38:58.240785 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.242935 kubelet[2512]: E0317 17:38:58.241408 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc"
Mar 17 17:38:58.242935 kubelet[2512]: E0317 17:38:58.241428 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc"
Mar 17 17:38:58.242287 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724-shm.mount: Deactivated successfully.
Mar 17 17:38:58.243322 kubelet[2512]: E0317 17:38:58.241462 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e"
Mar 17 17:38:58.250489 containerd[1432]: time="2025-03-17T17:38:58.250437474Z" level=error msg="Failed to destroy network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.250799 containerd[1432]: time="2025-03-17T17:38:58.250761744Z" level=error msg="encountered an error cleaning up failed sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.250993 containerd[1432]: time="2025-03-17T17:38:58.250972057Z" level=error msg="Failed to destroy network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.252668 containerd[1432]: time="2025-03-17T17:38:58.252571087Z" level=error msg="encountered an error cleaning up failed sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.254133 containerd[1432]: time="2025-03-17T17:38:58.254048480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.255384 containerd[1432]: time="2025-03-17T17:38:58.254092959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.255801 kubelet[2512]: E0317 17:38:58.255759 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.255843 kubelet[2512]: E0317 17:38:58.255820 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx"
Mar 17 17:38:58.255868 kubelet[2512]: E0317 17:38:58.255840 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx"
Mar 17 17:38:58.255909 kubelet[2512]: E0317 17:38:58.255875 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" podUID="f65e465b-a83f-4cf0-bb51-23b66ac6541f"
Mar 17 17:38:58.256278 kubelet[2512]: E0317 17:38:58.256118 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:38:58.256278 kubelet[2512]: E0317 17:38:58.256148 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn"
Mar 17 17:38:58.256278 kubelet[2512]: E0317 17:38:58.256163 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn" Mar 17 17:38:58.257576 kubelet[2512]: E0317 17:38:58.256274 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4wvdn" podUID="4bcadc09-d993-4b6b-a06f-decd561e1fef" Mar 17 17:38:59.079967 kubelet[2512]: I0317 17:38:59.079935 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e" Mar 17 17:38:59.082050 containerd[1432]: time="2025-03-17T17:38:59.081673233Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" Mar 17 17:38:59.082050 containerd[1432]: time="2025-03-17T17:38:59.081843068Z" level=info msg="Ensure that sandbox 832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e in task-service has been cleanup successfully" Mar 17 17:38:59.083279 kubelet[2512]: I0317 17:38:59.083051 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724" Mar 17 17:38:59.083612 containerd[1432]: time="2025-03-17T17:38:59.083492658Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" Mar 17 17:38:59.084106 containerd[1432]: time="2025-03-17T17:38:59.083964404Z" level=info msg="Ensure that sandbox fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724 in task-service has been cleanup successfully" Mar 17 17:38:59.084648 containerd[1432]: time="2025-03-17T17:38:59.084614344Z" level=info msg="TearDown network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" successfully" Mar 17 17:38:59.084648 containerd[1432]: time="2025-03-17T17:38:59.084643383Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" returns successfully" Mar 17 17:38:59.086003 kubelet[2512]: I0317 17:38:59.085219 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531" Mar 17 17:38:59.086122 containerd[1432]: time="2025-03-17T17:38:59.084988212Z" level=info msg="TearDown network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" successfully" Mar 17 17:38:59.086203 containerd[1432]: time="2025-03-17T17:38:59.086174896Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" returns successfully" Mar 17 17:38:59.086399 containerd[1432]: time="2025-03-17T17:38:59.085174967Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" Mar 17 17:38:59.086663 containerd[1432]: time="2025-03-17T17:38:59.086629963Z" level=info 
msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" Mar 17 17:38:59.087044 containerd[1432]: time="2025-03-17T17:38:59.086737719Z" level=info msg="TearDown network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" successfully" Mar 17 17:38:59.087044 containerd[1432]: time="2025-03-17T17:38:59.086760079Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" returns successfully" Mar 17 17:38:59.087121 containerd[1432]: time="2025-03-17T17:38:59.087051670Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" Mar 17 17:38:59.087146 containerd[1432]: time="2025-03-17T17:38:59.087119548Z" level=info msg="TearDown network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" successfully" Mar 17 17:38:59.087146 containerd[1432]: time="2025-03-17T17:38:59.087131587Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" returns successfully" Mar 17 17:38:59.087267 containerd[1432]: time="2025-03-17T17:38:59.087244104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:2,}" Mar 17 17:38:59.087481 kubelet[2512]: E0317 17:38:59.087428 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:59.088083 containerd[1432]: time="2025-03-17T17:38:59.087824046Z" level=info msg="Ensure that sandbox 0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531 in task-service has been cleanup successfully" Mar 17 17:38:59.088083 containerd[1432]: time="2025-03-17T17:38:59.088059039Z" level=info msg="TearDown network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" successfully" Mar 17 17:38:59.088083 containerd[1432]: time="2025-03-17T17:38:59.088074759Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" returns successfully" Mar 17 17:38:59.088581 containerd[1432]: time="2025-03-17T17:38:59.088544224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:2,}" Mar 17 17:38:59.089043 containerd[1432]: time="2025-03-17T17:38:59.088770537Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" Mar 17 17:38:59.089222 containerd[1432]: time="2025-03-17T17:38:59.089188485Z" level=info msg="TearDown network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" successfully" Mar 17 17:38:59.089293 containerd[1432]: time="2025-03-17T17:38:59.089279602Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" returns successfully" Mar 17 17:38:59.089517 kubelet[2512]: I0317 17:38:59.089476 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12" Mar 17 17:38:59.090802 containerd[1432]: time="2025-03-17T17:38:59.090749517Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" Mar 17 17:38:59.091271 kubelet[2512]: E0317 17:38:59.091184 2512 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:38:59.091548 containerd[1432]: time="2025-03-17T17:38:59.091446216Z" level=info msg="Ensure that sandbox 80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12 in task-service has been cleanup successfully" Mar 17 17:38:59.091934 containerd[1432]: time="2025-03-17T17:38:59.091911682Z" level=info msg="TearDown network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" successfully" Mar 17 17:38:59.092079 containerd[1432]: time="2025-03-17T17:38:59.092002799Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" returns successfully" Mar 17 17:38:59.092405 containerd[1432]: time="2025-03-17T17:38:59.092294070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:2,}" Mar 17 17:38:59.092662 containerd[1432]: time="2025-03-17T17:38:59.092634780Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" Mar 17 17:38:59.093072 containerd[1432]: time="2025-03-17T17:38:59.093048887Z" level=info msg="TearDown network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" successfully" Mar 17 17:38:59.093072 containerd[1432]: time="2025-03-17T17:38:59.093071047Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" returns successfully" Mar 17 17:38:59.093182 kubelet[2512]: I0317 17:38:59.093082 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975" Mar 17 17:38:59.094052 containerd[1432]: time="2025-03-17T17:38:59.094019218Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" Mar 17 17:38:59.094204 containerd[1432]: time="2025-03-17T17:38:59.094169013Z" level=info msg="Ensure that sandbox bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975 in task-service has been cleanup successfully" Mar 17 17:38:59.095224 containerd[1432]: time="2025-03-17T17:38:59.094382927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:38:59.095224 containerd[1432]: time="2025-03-17T17:38:59.095180022Z" level=info msg="TearDown network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" successfully" Mar 17 17:38:59.095320 containerd[1432]: time="2025-03-17T17:38:59.095201862Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" returns successfully" Mar 17 17:38:59.095962 containerd[1432]: time="2025-03-17T17:38:59.095655368Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" Mar 17 17:38:59.095962 containerd[1432]: time="2025-03-17T17:38:59.095736286Z" level=info msg="TearDown network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" successfully" Mar 17 17:38:59.095962 containerd[1432]: time="2025-03-17T17:38:59.095770405Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" returns successfully" Mar 17 17:38:59.096766 kubelet[2512]: I0317 
17:38:59.096465 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f" Mar 17 17:38:59.097333 containerd[1432]: time="2025-03-17T17:38:59.097296758Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" Mar 17 17:38:59.097396 containerd[1432]: time="2025-03-17T17:38:59.097346557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:2,}" Mar 17 17:38:59.097469 containerd[1432]: time="2025-03-17T17:38:59.097450193Z" level=info msg="Ensure that sandbox 22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f in task-service has been cleanup successfully" Mar 17 17:38:59.098029 containerd[1432]: time="2025-03-17T17:38:59.097928979Z" level=info msg="TearDown network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" successfully" Mar 17 17:38:59.098029 containerd[1432]: time="2025-03-17T17:38:59.097950978Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" returns successfully" Mar 17 17:38:59.099928 containerd[1432]: time="2025-03-17T17:38:59.099029185Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" Mar 17 17:38:59.099928 containerd[1432]: time="2025-03-17T17:38:59.099117303Z" level=info msg="TearDown network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" successfully" Mar 17 17:38:59.099928 containerd[1432]: time="2025-03-17T17:38:59.099126622Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" returns successfully" Mar 17 17:38:59.099928 containerd[1432]: time="2025-03-17T17:38:59.099782642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:38:59.184957 systemd[1]: run-netns-cni\x2d7cc97221\x2df140\x2d6000\x2d9f83\x2d280501b56f6a.mount: Deactivated successfully. Mar 17 17:38:59.185051 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12-shm.mount: Deactivated successfully. Mar 17 17:38:59.185107 systemd[1]: run-netns-cni\x2d95439425\x2daee4\x2d59d5\x2dd586\x2d3d84c8373aad.mount: Deactivated successfully. Mar 17 17:38:59.185152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531-shm.mount: Deactivated successfully. Mar 17 17:38:59.185198 systemd[1]: run-netns-cni\x2d6943471a\x2d555c\x2d185a\x2d209c\x2d79b16282802b.mount: Deactivated successfully. Mar 17 17:38:59.185243 systemd[1]: run-netns-cni\x2d3b8a0209\x2d238c\x2d0ecd\x2d721b\x2dcab64a18a104.mount: Deactivated successfully. Mar 17 17:38:59.185293 systemd[1]: run-netns-cni\x2d827283a2\x2d9368\x2d7198\x2d440d\x2d6f7999310bfb.mount: Deactivated successfully. Mar 17 17:38:59.185335 systemd[1]: run-netns-cni\x2d2c0866a6\x2d2a58\x2d626e\x2dc999\x2d410538387d6e.mount: Deactivated successfully. 
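
The "Deactivated successfully" records above close out one full retry cycle: kubelet stops each failed sandbox, containerd tears down its network, systemd reaps the per-sandbox network namespace and shm mounts (the \x2d sequences are systemd's escaping of "-" in unit names), and kubelet then re-issues RunPodSandbox with the Attempt counter in PodSandboxMetadata bumped from 1 to 2. A rough Go sketch of that loop against the CRI gRPC API follows; the containerd socket path is the usual default, the metadata values are copied from the log, and the real kubelet sync loop adds backoff and far more bookkeeping than this.

    // sandbox_retry.go: illustrative CRI retry loop matching the log's
    // stop/teardown/run cycle. The socket path is containerd's default and
    // the metadata values are copied from the log; kubelet's real sync loop
    // adds exponential backoff and sandbox garbage collection.
    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	cfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "csi-node-driver-wbfxc",
    			Uid:       "cd6988e4-6af5-42c1-bd82-b51b176a8f5e",
    			Namespace: "calico-system",
    			Attempt:   1,
    		},
    	}
    	for cfg.Metadata.Attempt <= 3 {
    		resp, err := rt.RunPodSandbox(context.Background(),
    			&runtimeapi.RunPodSandboxRequest{Config: cfg})
    		if err == nil {
    			log.Printf("sandbox %s is running", resp.PodSandboxId)
    			return
    		}
    		// On failure the runtime has already tried to destroy the
    		// half-built network (the "failed (delete)" records); kubelet
    		// retries with Attempt+1, which is why the log walks through
    		// Attempt:1, Attempt:2, Attempt:3.
    		log.Printf("attempt %d: %v", cfg.Metadata.Attempt, err)
    		cfg.Metadata.Attempt++
    	}
    }
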
Mar 17 17:38:59.373066 containerd[1432]: time="2025-03-17T17:38:59.373005852Z" level=error msg="Failed to destroy network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.373458 containerd[1432]: time="2025-03-17T17:38:59.373334002Z" level=error msg="encountered an error cleaning up failed sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.373458 containerd[1432]: time="2025-03-17T17:38:59.373395601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.373697 kubelet[2512]: E0317 17:38:59.373654 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.374071 kubelet[2512]: E0317 17:38:59.374048 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn" Mar 17 17:38:59.374156 kubelet[2512]: E0317 17:38:59.374140 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn" Mar 17 17:38:59.374283 kubelet[2512]: E0317 17:38:59.374257 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4wvdn" 
podUID="4bcadc09-d993-4b6b-a06f-decd561e1fef" Mar 17 17:38:59.379698 containerd[1432]: time="2025-03-17T17:38:59.379616331Z" level=error msg="Failed to destroy network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.380234 containerd[1432]: time="2025-03-17T17:38:59.380193474Z" level=error msg="encountered an error cleaning up failed sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.380296 containerd[1432]: time="2025-03-17T17:38:59.380255112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.380488 kubelet[2512]: E0317 17:38:59.380451 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.380592 kubelet[2512]: E0317 17:38:59.380576 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:59.380726 kubelet[2512]: E0317 17:38:59.380707 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:38:59.380851 kubelet[2512]: E0317 17:38:59.380826 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e" Mar 17 17:38:59.384353 containerd[1432]: time="2025-03-17T17:38:59.384318708Z" level=error msg="Failed to destroy network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.385270 containerd[1432]: time="2025-03-17T17:38:59.385232801Z" level=error msg="encountered an error cleaning up failed sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.385379 containerd[1432]: time="2025-03-17T17:38:59.385353197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.385628 kubelet[2512]: E0317 17:38:59.385569 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.385688 kubelet[2512]: E0317 17:38:59.385656 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v" Mar 17 17:38:59.385688 kubelet[2512]: E0317 17:38:59.385676 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v" Mar 17 17:38:59.385809 kubelet[2512]: E0317 17:38:59.385767 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-smb5v" podUID="fb2cacc2-3045-4f05-a115-8af2b8c3ae93" Mar 17 17:38:59.408521 containerd[1432]: time="2025-03-17T17:38:59.408456494Z" level=error msg="Failed to destroy network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.409027 containerd[1432]: time="2025-03-17T17:38:59.408964719Z" level=error msg="encountered an error cleaning up failed sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.409090 containerd[1432]: time="2025-03-17T17:38:59.409046116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.409293 kubelet[2512]: E0317 17:38:59.409255 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.409345 kubelet[2512]: E0317 17:38:59.409313 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" Mar 17 17:38:59.409345 kubelet[2512]: E0317 17:38:59.409336 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" Mar 17 17:38:59.409413 kubelet[2512]: E0317 17:38:59.409382 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" podUID="99b568dd-b905-4754-b6a7-db2767a8c584" Mar 17 17:38:59.420326 containerd[1432]: time="2025-03-17T17:38:59.420282134Z" level=error msg="Failed to destroy network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.420981 containerd[1432]: time="2025-03-17T17:38:59.420943474Z" level=error msg="encountered an error cleaning up failed sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.421375 containerd[1432]: time="2025-03-17T17:38:59.421348422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.421760 kubelet[2512]: E0317 17:38:59.421726 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.421933 kubelet[2512]: E0317 17:38:59.421912 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" Mar 17 17:38:59.422052 kubelet[2512]: E0317 17:38:59.422023 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" Mar 17 17:38:59.422468 kubelet[2512]: E0317 17:38:59.422229 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" podUID="7dd0b36a-0dc5-4c34-a561-3245bd3255c4" Mar 17 17:38:59.423523 containerd[1432]: time="2025-03-17T17:38:59.423491957Z" level=error msg="Failed to destroy network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.424164 containerd[1432]: time="2025-03-17T17:38:59.424135857Z" level=error msg="encountered an error cleaning up failed sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.424298 containerd[1432]: time="2025-03-17T17:38:59.424275853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.424656 kubelet[2512]: E0317 17:38:59.424524 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:38:59.424721 kubelet[2512]: E0317 17:38:59.424612 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" Mar 17 17:38:59.424721 kubelet[2512]: E0317 17:38:59.424704 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" Mar 17 17:38:59.424782 kubelet[2512]: E0317 17:38:59.424758 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" podUID="f65e465b-a83f-4cf0-bb51-23b66ac6541f" Mar 17 17:39:00.101489 kubelet[2512]: I0317 17:39:00.101452 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e" Mar 17 17:39:00.103144 containerd[1432]: time="2025-03-17T17:39:00.102713339Z" level=info msg="StopPodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\"" Mar 17 17:39:00.103144 containerd[1432]: time="2025-03-17T17:39:00.102879294Z" level=info msg="Ensure that sandbox 367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e in task-service has been cleanup successfully" Mar 17 17:39:00.103144 containerd[1432]: time="2025-03-17T17:39:00.103063689Z" level=info msg="TearDown network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" successfully" Mar 17 17:39:00.103144 containerd[1432]: time="2025-03-17T17:39:00.103077969Z" level=info msg="StopPodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" returns successfully" Mar 17 17:39:00.104257 containerd[1432]: time="2025-03-17T17:39:00.104224495Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" Mar 17 17:39:00.104321 containerd[1432]: time="2025-03-17T17:39:00.104296293Z" level=info msg="TearDown network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" successfully" Mar 17 17:39:00.104321 containerd[1432]: time="2025-03-17T17:39:00.104306213Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" returns successfully" Mar 17 17:39:00.104694 containerd[1432]: time="2025-03-17T17:39:00.104665522Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" Mar 17 17:39:00.104741 containerd[1432]: time="2025-03-17T17:39:00.104729600Z" level=info msg="TearDown network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" successfully" Mar 17 17:39:00.104741 containerd[1432]: time="2025-03-17T17:39:00.104739080Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" returns successfully" Mar 17 17:39:00.105366 kubelet[2512]: I0317 17:39:00.105333 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f" Mar 17 17:39:00.106100 containerd[1432]: time="2025-03-17T17:39:00.105895966Z" level=info msg="StopPodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\"" Mar 17 17:39:00.106629 containerd[1432]: time="2025-03-17T17:39:00.106541547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:3,}" Mar 17 17:39:00.107663 containerd[1432]: time="2025-03-17T17:39:00.107029973Z" level=info msg="Ensure that 
sandbox db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f in task-service has been cleanup successfully" Mar 17 17:39:00.107853 kubelet[2512]: I0317 17:39:00.107680 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c" Mar 17 17:39:00.108531 containerd[1432]: time="2025-03-17T17:39:00.108379334Z" level=info msg="TearDown network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" successfully" Mar 17 17:39:00.108531 containerd[1432]: time="2025-03-17T17:39:00.108455852Z" level=info msg="StopPodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" returns successfully" Mar 17 17:39:00.108751 containerd[1432]: time="2025-03-17T17:39:00.108723804Z" level=info msg="StopPodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\"" Mar 17 17:39:00.108887 containerd[1432]: time="2025-03-17T17:39:00.108866720Z" level=info msg="Ensure that sandbox 28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c in task-service has been cleanup successfully" Mar 17 17:39:00.109080 containerd[1432]: time="2025-03-17T17:39:00.109056714Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" Mar 17 17:39:00.109153 containerd[1432]: time="2025-03-17T17:39:00.109138392Z" level=info msg="TearDown network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" successfully" Mar 17 17:39:00.109188 containerd[1432]: time="2025-03-17T17:39:00.109151471Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" returns successfully" Mar 17 17:39:00.109218 containerd[1432]: time="2025-03-17T17:39:00.109074353Z" level=info msg="TearDown network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" successfully" Mar 17 17:39:00.109218 containerd[1432]: time="2025-03-17T17:39:00.109201230Z" level=info msg="StopPodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" returns successfully" Mar 17 17:39:00.110341 containerd[1432]: time="2025-03-17T17:39:00.110143522Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" Mar 17 17:39:00.110341 containerd[1432]: time="2025-03-17T17:39:00.110205480Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" Mar 17 17:39:00.110341 containerd[1432]: time="2025-03-17T17:39:00.110225640Z" level=info msg="TearDown network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" successfully" Mar 17 17:39:00.110341 containerd[1432]: time="2025-03-17T17:39:00.110235880Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" returns successfully" Mar 17 17:39:00.110341 containerd[1432]: time="2025-03-17T17:39:00.110273038Z" level=info msg="TearDown network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" successfully" Mar 17 17:39:00.110341 containerd[1432]: time="2025-03-17T17:39:00.110283798Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" returns successfully" Mar 17 17:39:00.110779 containerd[1432]: time="2025-03-17T17:39:00.110740625Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:39:00.111143 containerd[1432]: time="2025-03-17T17:39:00.111104134Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" Mar 17 17:39:00.111328 containerd[1432]: time="2025-03-17T17:39:00.111297089Z" level=info msg="TearDown network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" successfully" Mar 17 17:39:00.111365 kubelet[2512]: I0317 17:39:00.111332 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447" Mar 17 17:39:00.111436 containerd[1432]: time="2025-03-17T17:39:00.111420925Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" returns successfully" Mar 17 17:39:00.111884 containerd[1432]: time="2025-03-17T17:39:00.111856552Z" level=info msg="StopPodSandbox for \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\"" Mar 17 17:39:00.111933 kubelet[2512]: E0317 17:39:00.111910 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112024147Z" level=info msg="Ensure that sandbox 1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447 in task-service has been cleanup successfully" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112185703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:3,}" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112204902Z" level=info msg="TearDown network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" successfully" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112220342Z" level=info msg="StopPodSandbox for \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" returns successfully" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112500173Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112569611Z" level=info msg="TearDown network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" successfully" Mar 17 17:39:00.112665 containerd[1432]: time="2025-03-17T17:39:00.112579371Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" returns successfully" Mar 17 17:39:00.113071 containerd[1432]: time="2025-03-17T17:39:00.113041398Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" Mar 17 17:39:00.113144 containerd[1432]: time="2025-03-17T17:39:00.113124995Z" level=info msg="TearDown network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" successfully" Mar 17 17:39:00.113144 containerd[1432]: time="2025-03-17T17:39:00.113140875Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" returns successfully" Mar 17 17:39:00.113587 containerd[1432]: time="2025-03-17T17:39:00.113560182Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:39:00.113898 kubelet[2512]: I0317 17:39:00.113874 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9" Mar 17 17:39:00.114600 containerd[1432]: time="2025-03-17T17:39:00.114323800Z" level=info msg="StopPodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\"" Mar 17 17:39:00.114600 containerd[1432]: time="2025-03-17T17:39:00.114469516Z" level=info msg="Ensure that sandbox 67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9 in task-service has been cleanup successfully" Mar 17 17:39:00.114863 containerd[1432]: time="2025-03-17T17:39:00.114744228Z" level=info msg="TearDown network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" successfully" Mar 17 17:39:00.114863 containerd[1432]: time="2025-03-17T17:39:00.114773507Z" level=info msg="StopPodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" returns successfully" Mar 17 17:39:00.115163 containerd[1432]: time="2025-03-17T17:39:00.115127577Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" Mar 17 17:39:00.115499 containerd[1432]: time="2025-03-17T17:39:00.115480286Z" level=info msg="TearDown network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" successfully" Mar 17 17:39:00.115609 containerd[1432]: time="2025-03-17T17:39:00.115572924Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" returns successfully" Mar 17 17:39:00.115905 kubelet[2512]: I0317 17:39:00.115882 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477" Mar 17 17:39:00.116591 containerd[1432]: time="2025-03-17T17:39:00.116414579Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" Mar 17 17:39:00.116591 containerd[1432]: time="2025-03-17T17:39:00.116455698Z" level=info msg="StopPodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\"" Mar 17 17:39:00.116591 containerd[1432]: time="2025-03-17T17:39:00.116497017Z" level=info msg="TearDown network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" successfully" Mar 17 17:39:00.116591 containerd[1432]: time="2025-03-17T17:39:00.116507416Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" returns successfully" Mar 17 17:39:00.116808 containerd[1432]: time="2025-03-17T17:39:00.116590694Z" level=info msg="Ensure that sandbox f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477 in task-service has been cleanup successfully" Mar 17 17:39:00.116808 containerd[1432]: time="2025-03-17T17:39:00.116760089Z" level=info msg="TearDown network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" successfully" Mar 17 17:39:00.116808 containerd[1432]: time="2025-03-17T17:39:00.116773169Z" level=info msg="StopPodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" returns successfully" Mar 17 17:39:00.116908 kubelet[2512]: E0317 17:39:00.116653 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:00.117119 containerd[1432]: time="2025-03-17T17:39:00.117083079Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" Mar 17 17:39:00.117365 containerd[1432]: time="2025-03-17T17:39:00.117113759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:3,}" Mar 17 17:39:00.118323 containerd[1432]: time="2025-03-17T17:39:00.118295724Z" level=info msg="TearDown network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" successfully" Mar 17 17:39:00.118383 containerd[1432]: time="2025-03-17T17:39:00.118369882Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" returns successfully" Mar 17 17:39:00.118884 containerd[1432]: time="2025-03-17T17:39:00.118861907Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" Mar 17 17:39:00.119124 containerd[1432]: time="2025-03-17T17:39:00.119040462Z" level=info msg="TearDown network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" successfully" Mar 17 17:39:00.119124 containerd[1432]: time="2025-03-17T17:39:00.119062062Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" returns successfully" Mar 17 17:39:00.119886 containerd[1432]: time="2025-03-17T17:39:00.119857998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:3,}" Mar 17 17:39:00.188116 systemd[1]: run-netns-cni\x2defbfd8e9\x2d0c3b\x2d0e41\x2d54b5\x2d8bf54aa6ba6a.mount: Deactivated successfully. Mar 17 17:39:00.188197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9-shm.mount: Deactivated successfully. Mar 17 17:39:00.188248 systemd[1]: run-netns-cni\x2d3c4d6c71\x2d3b6b\x2d0548\x2de962\x2d231678e3592f.mount: Deactivated successfully. Mar 17 17:39:00.188292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c-shm.mount: Deactivated successfully. Mar 17 17:39:00.188337 systemd[1]: run-netns-cni\x2db77480f1\x2d7048\x2d3e40\x2d0925\x2dcbae43b6e45f.mount: Deactivated successfully. Mar 17 17:39:00.188379 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477-shm.mount: Deactivated successfully. 
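
The interleaved dns.go:153 warnings are a separate and more benign problem: when building a pod's resolv.conf, kubelet caps the nameserver list, and this host's resolver configuration evidently lists more entries than the cap, so the applied line is truncated to "1.1.1.1 1.0.0.1 8.8.8.8". The Go sketch below reproduces that truncation under the assumption of the conventional three-server limit; the exact constant and merge logic live in kubelet's dns package.

    // dns_cap.go: sketch of the nameserver truncation behind the
    // "Nameserver limits exceeded" warnings, assuming the conventional
    // three-nameserver cap kubelet applies when building a pod's resolv.conf.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // assumed cap, matching the three applied servers in the log

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		// This is the condition that produces the dns.go:153 warning above.
    		fmt.Printf("nameserver limits exceeded, omitting %d of %d\n",
    			len(servers)-maxNameservers, len(servers))
    		servers = servers[:maxNameservers]
    	}
    	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }

Trimming the host's resolv.conf to three nameservers should silence the warning; it has no bearing on the sandbox failures, which persist until calico/node is healthy.
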
Mar 17 17:39:00.292868 containerd[1432]: time="2025-03-17T17:39:00.292761826Z" level=error msg="Failed to destroy network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.293166 containerd[1432]: time="2025-03-17T17:39:00.293126256Z" level=error msg="encountered an error cleaning up failed sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.293210 containerd[1432]: time="2025-03-17T17:39:00.293192934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.293605 kubelet[2512]: E0317 17:39:00.293419 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.293605 kubelet[2512]: E0317 17:39:00.293483 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" Mar 17 17:39:00.293605 kubelet[2512]: E0317 17:39:00.293504 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" Mar 17 17:39:00.293811 kubelet[2512]: E0317 17:39:00.293553 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5ffdc75-plj5f_calico-system(7dd0b36a-0dc5-4c34-a561-3245bd3255c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" podUID="7dd0b36a-0dc5-4c34-a561-3245bd3255c4" Mar 17 17:39:00.303081 containerd[1432]: time="2025-03-17T17:39:00.303011087Z" level=error msg="Failed to destroy network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.303496 containerd[1432]: time="2025-03-17T17:39:00.303462714Z" level=error msg="encountered an error cleaning up failed sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.303579 containerd[1432]: time="2025-03-17T17:39:00.303537232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.303799 kubelet[2512]: E0317 17:39:00.303755 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.303904 kubelet[2512]: E0317 17:39:00.303824 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v" Mar 17 17:39:00.303904 kubelet[2512]: E0317 17:39:00.303849 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-smb5v" Mar 17 17:39:00.303904 kubelet[2512]: E0317 17:39:00.303886 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-smb5v_kube-system(fb2cacc2-3045-4f05-a115-8af2b8c3ae93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-smb5v" podUID="fb2cacc2-3045-4f05-a115-8af2b8c3ae93" Mar 17 17:39:00.314346 containerd[1432]: time="2025-03-17T17:39:00.314282358Z" level=error msg="Failed to destroy network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.315969 containerd[1432]: time="2025-03-17T17:39:00.315931469Z" level=error msg="encountered an error cleaning up failed sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.316038 containerd[1432]: time="2025-03-17T17:39:00.316020627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.318696 kubelet[2512]: E0317 17:39:00.316249 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.318933 kubelet[2512]: E0317 17:39:00.318721 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:39:00.319139 kubelet[2512]: E0317 17:39:00.318973 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbfxc" Mar 17 17:39:00.319328 kubelet[2512]: E0317 17:39:00.319293 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wbfxc_calico-system(cd6988e4-6af5-42c1-bd82-b51b176a8f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wbfxc" podUID="cd6988e4-6af5-42c1-bd82-b51b176a8f5e" Mar 17 17:39:00.330289 containerd[1432]: time="2025-03-17T17:39:00.330239811Z" level=error msg="Failed to destroy network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.330606 containerd[1432]: time="2025-03-17T17:39:00.330579241Z" level=error msg="encountered an error cleaning up failed sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.330681 containerd[1432]: time="2025-03-17T17:39:00.330660599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.330926 kubelet[2512]: E0317 17:39:00.330895 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.330978 kubelet[2512]: E0317 17:39:00.330950 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" Mar 17 17:39:00.330978 kubelet[2512]: E0317 17:39:00.330969 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" Mar 17 17:39:00.331050 kubelet[2512]: E0317 17:39:00.331017 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b59c58749-s4pv2_calico-apiserver(99b568dd-b905-4754-b6a7-db2767a8c584)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" podUID="99b568dd-b905-4754-b6a7-db2767a8c584" Mar 17 17:39:00.338154 containerd[1432]: time="2025-03-17T17:39:00.337444721Z" level=error msg="Failed to destroy network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.338561 containerd[1432]: time="2025-03-17T17:39:00.338522849Z" level=error msg="encountered an error cleaning up failed sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.338641 containerd[1432]: time="2025-03-17T17:39:00.338599887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.338867 kubelet[2512]: E0317 17:39:00.338825 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.338925 kubelet[2512]: E0317 17:39:00.338884 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" Mar 17 17:39:00.338925 kubelet[2512]: E0317 17:39:00.338910 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" Mar 17 17:39:00.338973 kubelet[2512]: E0317 17:39:00.338948 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6b59c58749-tz5hx_calico-apiserver(f65e465b-a83f-4cf0-bb51-23b66ac6541f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" podUID="f65e465b-a83f-4cf0-bb51-23b66ac6541f" Mar 17 17:39:00.366729 containerd[1432]: time="2025-03-17T17:39:00.366582950Z" level=error msg="Failed to destroy network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.367096 containerd[1432]: time="2025-03-17T17:39:00.367066735Z" level=error msg="encountered an error cleaning up failed sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.367302 containerd[1432]: time="2025-03-17T17:39:00.367197372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.367497 kubelet[2512]: E0317 17:39:00.367440 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:00.367541 kubelet[2512]: E0317 17:39:00.367504 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn" Mar 17 17:39:00.367541 kubelet[2512]: E0317 17:39:00.367524 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wvdn" Mar 17 17:39:00.367605 kubelet[2512]: E0317 17:39:00.367565 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4wvdn_kube-system(4bcadc09-d993-4b6b-a06f-decd561e1fef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4wvdn" podUID="4bcadc09-d993-4b6b-a06f-decd561e1fef" Mar 17 17:39:00.457437 containerd[1432]: time="2025-03-17T17:39:00.457373217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:00.471091 containerd[1432]: time="2025-03-17T17:39:00.471038978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:39:00.475661 containerd[1432]: time="2025-03-17T17:39:00.475589285Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:00.477483 containerd[1432]: time="2025-03-17T17:39:00.477456630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:00.478100 containerd[1432]: time="2025-03-17T17:39:00.478020574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.417872335s" Mar 17 17:39:00.478100 containerd[1432]: time="2025-03-17T17:39:00.478053053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:39:00.484656 containerd[1432]: time="2025-03-17T17:39:00.484556183Z" level=info msg="CreateContainer within sandbox \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:39:00.516592 containerd[1432]: time="2025-03-17T17:39:00.516492049Z" level=info msg="CreateContainer within sandbox \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\"" Mar 17 17:39:00.519250 containerd[1432]: time="2025-03-17T17:39:00.517796691Z" level=info msg="StartContainer for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\"" Mar 17 17:39:00.576784 systemd[1]: Started cri-containerd-a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e.scope - libcontainer container a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e. Mar 17 17:39:00.604483 containerd[1432]: time="2025-03-17T17:39:00.604436960Z" level=info msg="StartContainer for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" returns successfully" Mar 17 17:39:00.786951 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Mar 17 17:39:00.788971 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Mar 17 17:39:01.122129 kubelet[2512]: E0317 17:39:01.121433 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:01.123606 kubelet[2512]: I0317 17:39:01.123584 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761" Mar 17 17:39:01.125263 containerd[1432]: time="2025-03-17T17:39:01.124873652Z" level=info msg="StopPodSandbox for \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\"" Mar 17 17:39:01.125263 containerd[1432]: time="2025-03-17T17:39:01.125062967Z" level=info msg="Ensure that sandbox 20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761 in task-service has been cleanup successfully" Mar 17 17:39:01.125263 containerd[1432]: time="2025-03-17T17:39:01.125238682Z" level=info msg="TearDown network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\" successfully" Mar 17 17:39:01.125263 containerd[1432]: time="2025-03-17T17:39:01.125251361Z" level=info msg="StopPodSandbox for \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\" returns successfully" Mar 17 17:39:01.127783 containerd[1432]: time="2025-03-17T17:39:01.127751611Z" level=info msg="StopPodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\"" Mar 17 17:39:01.127861 containerd[1432]: time="2025-03-17T17:39:01.127831569Z" level=info msg="TearDown network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" successfully" Mar 17 17:39:01.127861 containerd[1432]: time="2025-03-17T17:39:01.127841729Z" level=info msg="StopPodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" returns successfully" Mar 17 17:39:01.128831 containerd[1432]: time="2025-03-17T17:39:01.128694945Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" Mar 17 17:39:01.128831 containerd[1432]: time="2025-03-17T17:39:01.128783022Z" level=info msg="TearDown network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" successfully" Mar 17 17:39:01.128831 containerd[1432]: time="2025-03-17T17:39:01.128792982Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" returns successfully" Mar 17 17:39:01.129186 containerd[1432]: time="2025-03-17T17:39:01.129101453Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" Mar 17 17:39:01.129186 containerd[1432]: time="2025-03-17T17:39:01.129169491Z" level=info msg="TearDown network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" successfully" Mar 17 17:39:01.129186 containerd[1432]: time="2025-03-17T17:39:01.129178131Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" returns successfully" Mar 17 17:39:01.130151 containerd[1432]: time="2025-03-17T17:39:01.130122664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:4,}" Mar 17 17:39:01.130580 kubelet[2512]: I0317 17:39:01.130546 2512 pod_container_deletor.go:80] "Container
not found in pod's containers" containerID="97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54" Mar 17 17:39:01.131962 containerd[1432]: time="2025-03-17T17:39:01.131076398Z" level=info msg="StopPodSandbox for \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\"" Mar 17 17:39:01.131962 containerd[1432]: time="2025-03-17T17:39:01.131830896Z" level=info msg="Ensure that sandbox 97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54 in task-service has been cleanup successfully" Mar 17 17:39:01.133010 containerd[1432]: time="2025-03-17T17:39:01.132970224Z" level=info msg="TearDown network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\" successfully" Mar 17 17:39:01.133010 containerd[1432]: time="2025-03-17T17:39:01.133001824Z" level=info msg="StopPodSandbox for \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\" returns successfully" Mar 17 17:39:01.133615 containerd[1432]: time="2025-03-17T17:39:01.133564328Z" level=info msg="StopPodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\"" Mar 17 17:39:01.138094 containerd[1432]: time="2025-03-17T17:39:01.138011923Z" level=info msg="TearDown network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" successfully" Mar 17 17:39:01.138094 containerd[1432]: time="2025-03-17T17:39:01.138033282Z" level=info msg="StopPodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" returns successfully" Mar 17 17:39:01.140739 containerd[1432]: time="2025-03-17T17:39:01.140710127Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" Mar 17 17:39:01.140816 containerd[1432]: time="2025-03-17T17:39:01.140789245Z" level=info msg="TearDown network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" successfully" Mar 17 17:39:01.140816 containerd[1432]: time="2025-03-17T17:39:01.140799605Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" returns successfully" Mar 17 17:39:01.142249 containerd[1432]: time="2025-03-17T17:39:01.141941652Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" Mar 17 17:39:01.142327 kubelet[2512]: I0317 17:39:01.142302 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc" Mar 17 17:39:01.143927 containerd[1432]: time="2025-03-17T17:39:01.143900237Z" level=info msg="StopPodSandbox for \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\"" Mar 17 17:39:01.144459 containerd[1432]: time="2025-03-17T17:39:01.144387744Z" level=info msg="Ensure that sandbox 0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc in task-service has been cleanup successfully" Mar 17 17:39:01.144940 containerd[1432]: time="2025-03-17T17:39:01.144918529Z" level=info msg="TearDown network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" successfully" Mar 17 17:39:01.145098 containerd[1432]: time="2025-03-17T17:39:01.145061605Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" returns successfully" Mar 17 17:39:01.145997 containerd[1432]: time="2025-03-17T17:39:01.145967179Z" level=info msg="TearDown network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\" successfully" 
Mar 17 17:39:01.146902 containerd[1432]: time="2025-03-17T17:39:01.146716478Z" level=info msg="StopPodSandbox for \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\" returns successfully" Mar 17 17:39:01.146902 containerd[1432]: time="2025-03-17T17:39:01.146059497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:4,}" Mar 17 17:39:01.147563 containerd[1432]: time="2025-03-17T17:39:01.147529895Z" level=info msg="StopPodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\"" Mar 17 17:39:01.147759 containerd[1432]: time="2025-03-17T17:39:01.147742009Z" level=info msg="TearDown network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" successfully" Mar 17 17:39:01.147869 containerd[1432]: time="2025-03-17T17:39:01.147826007Z" level=info msg="StopPodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" returns successfully" Mar 17 17:39:01.148736 kubelet[2512]: I0317 17:39:01.148710 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc" Mar 17 17:39:01.150099 containerd[1432]: time="2025-03-17T17:39:01.149680955Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" Mar 17 17:39:01.150099 containerd[1432]: time="2025-03-17T17:39:01.149774512Z" level=info msg="TearDown network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" successfully" Mar 17 17:39:01.150099 containerd[1432]: time="2025-03-17T17:39:01.149783912Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" returns successfully" Mar 17 17:39:01.150099 containerd[1432]: time="2025-03-17T17:39:01.149894589Z" level=info msg="StopPodSandbox for \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\"" Mar 17 17:39:01.150099 containerd[1432]: time="2025-03-17T17:39:01.150024145Z" level=info msg="Ensure that sandbox df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc in task-service has been cleanup successfully" Mar 17 17:39:01.152671 containerd[1432]: time="2025-03-17T17:39:01.152636552Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" Mar 17 17:39:01.152906 containerd[1432]: time="2025-03-17T17:39:01.152785268Z" level=info msg="TearDown network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" successfully" Mar 17 17:39:01.152906 containerd[1432]: time="2025-03-17T17:39:01.152800427Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" returns successfully" Mar 17 17:39:01.153327 containerd[1432]: time="2025-03-17T17:39:01.153302013Z" level=info msg="TearDown network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\" successfully" Mar 17 17:39:01.153416 containerd[1432]: time="2025-03-17T17:39:01.153332492Z" level=info msg="StopPodSandbox for \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\" returns successfully" Mar 17 17:39:01.154894 containerd[1432]: time="2025-03-17T17:39:01.154820331Z" level=info msg="StopPodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\"" Mar 17 17:39:01.156470 containerd[1432]: time="2025-03-17T17:39:01.156042736Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:39:01.156915 containerd[1432]: time="2025-03-17T17:39:01.156824154Z" level=info msg="TearDown network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" successfully" Mar 17 17:39:01.156994 containerd[1432]: time="2025-03-17T17:39:01.156912672Z" level=info msg="StopPodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" returns successfully" Mar 17 17:39:01.157762 containerd[1432]: time="2025-03-17T17:39:01.157723209Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" Mar 17 17:39:01.158072 containerd[1432]: time="2025-03-17T17:39:01.157906204Z" level=info msg="TearDown network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" successfully" Mar 17 17:39:01.158072 containerd[1432]: time="2025-03-17T17:39:01.157923763Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" returns successfully" Mar 17 17:39:01.160328 containerd[1432]: time="2025-03-17T17:39:01.160288617Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" Mar 17 17:39:01.160417 containerd[1432]: time="2025-03-17T17:39:01.160397774Z" level=info msg="TearDown network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" successfully" Mar 17 17:39:01.160417 containerd[1432]: time="2025-03-17T17:39:01.160412773Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" returns successfully" Mar 17 17:39:01.160971 kubelet[2512]: E0317 17:39:01.160762 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:01.163387 containerd[1432]: time="2025-03-17T17:39:01.163354091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:4,}" Mar 17 17:39:01.167136 kubelet[2512]: I0317 17:39:01.166468 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5" Mar 17 17:39:01.168254 containerd[1432]: time="2025-03-17T17:39:01.167731448Z" level=info msg="StopPodSandbox for \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\"" Mar 17 17:39:01.168254 containerd[1432]: time="2025-03-17T17:39:01.167898083Z" level=info msg="Ensure that sandbox c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5 in task-service has been cleanup successfully" Mar 17 17:39:01.168313 containerd[1432]: time="2025-03-17T17:39:01.168264673Z" level=info msg="TearDown network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\" successfully" Mar 17 17:39:01.168313 containerd[1432]: time="2025-03-17T17:39:01.168279632Z" level=info msg="StopPodSandbox for \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\" returns successfully" Mar 17 17:39:01.171805 containerd[1432]: time="2025-03-17T17:39:01.171010556Z" level=info msg="StopPodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\"" Mar 17 17:39:01.172155 containerd[1432]: time="2025-03-17T17:39:01.172072126Z" 
level=info msg="TearDown network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" successfully" Mar 17 17:39:01.172155 containerd[1432]: time="2025-03-17T17:39:01.172092525Z" level=info msg="StopPodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" returns successfully" Mar 17 17:39:01.172945 containerd[1432]: time="2025-03-17T17:39:01.172923822Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" Mar 17 17:39:01.173232 containerd[1432]: time="2025-03-17T17:39:01.173212134Z" level=info msg="TearDown network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" successfully" Mar 17 17:39:01.173294 containerd[1432]: time="2025-03-17T17:39:01.173282972Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" returns successfully" Mar 17 17:39:01.174601 containerd[1432]: time="2025-03-17T17:39:01.174570976Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" Mar 17 17:39:01.175183 kubelet[2512]: I0317 17:39:01.175154 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410" Mar 17 17:39:01.175459 containerd[1432]: time="2025-03-17T17:39:01.175351354Z" level=info msg="TearDown network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" successfully" Mar 17 17:39:01.175459 containerd[1432]: time="2025-03-17T17:39:01.175397432Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" returns successfully" Mar 17 17:39:01.175864 kubelet[2512]: E0317 17:39:01.175730 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:01.177127 containerd[1432]: time="2025-03-17T17:39:01.176188810Z" level=info msg="StopPodSandbox for \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\"" Mar 17 17:39:01.177127 containerd[1432]: time="2025-03-17T17:39:01.176558720Z" level=info msg="Ensure that sandbox fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410 in task-service has been cleanup successfully" Mar 17 17:39:01.177127 containerd[1432]: time="2025-03-17T17:39:01.176689756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:4,}" Mar 17 17:39:01.177127 containerd[1432]: time="2025-03-17T17:39:01.177118224Z" level=info msg="TearDown network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\" successfully" Mar 17 17:39:01.177356 containerd[1432]: time="2025-03-17T17:39:01.177133864Z" level=info msg="StopPodSandbox for \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\" returns successfully" Mar 17 17:39:01.177989 containerd[1432]: time="2025-03-17T17:39:01.177942481Z" level=info msg="StopPodSandbox for \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\"" Mar 17 17:39:01.178226 containerd[1432]: time="2025-03-17T17:39:01.178208593Z" level=info msg="TearDown network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" successfully" Mar 17 17:39:01.178302 containerd[1432]: time="2025-03-17T17:39:01.178289231Z" level=info msg="StopPodSandbox for 
\"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" returns successfully" Mar 17 17:39:01.179289 containerd[1432]: time="2025-03-17T17:39:01.179205125Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" Mar 17 17:39:01.179355 containerd[1432]: time="2025-03-17T17:39:01.179328642Z" level=info msg="TearDown network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" successfully" Mar 17 17:39:01.179355 containerd[1432]: time="2025-03-17T17:39:01.179340002Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" returns successfully" Mar 17 17:39:01.180062 containerd[1432]: time="2025-03-17T17:39:01.180032342Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" Mar 17 17:39:01.180131 containerd[1432]: time="2025-03-17T17:39:01.180111900Z" level=info msg="TearDown network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" successfully" Mar 17 17:39:01.180131 containerd[1432]: time="2025-03-17T17:39:01.180121780Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" returns successfully" Mar 17 17:39:01.181224 containerd[1432]: time="2025-03-17T17:39:01.180837440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:39:01.191382 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc-shm.mount: Deactivated successfully. Mar 17 17:39:01.191466 systemd[1]: run-netns-cni\x2d602530a2\x2d4b15\x2d8b47\x2d0f98\x2de0d89e7a3447.mount: Deactivated successfully. Mar 17 17:39:01.191515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761-shm.mount: Deactivated successfully. Mar 17 17:39:01.191563 systemd[1]: run-netns-cni\x2d6af9c9da\x2d916a\x2dff17\x2d9555\x2de4fd489ba7bf.mount: Deactivated successfully. Mar 17 17:39:01.191608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54-shm.mount: Deactivated successfully. Mar 17 17:39:01.191665 systemd[1]: run-netns-cni\x2d09b6eee1\x2d3efd\x2d5091\x2d75e8\x2d702010e87007.mount: Deactivated successfully. Mar 17 17:39:01.191709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc-shm.mount: Deactivated successfully. Mar 17 17:39:01.191756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990955049.mount: Deactivated successfully. 
Mar 17 17:39:01.593329 systemd-networkd[1378]: cali197c2fd2c68: Link UP Mar 17 17:39:01.593607 systemd-networkd[1378]: cali197c2fd2c68: Gained carrier Mar 17 17:39:01.619088 kubelet[2512]: I0317 17:39:01.618885 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tp8wk" podStartSLOduration=2.307707683 podStartE2EDuration="14.618866493s" podCreationTimestamp="2025-03-17 17:38:47 +0000 UTC" firstStartedPulling="2025-03-17 17:38:48.167568863 +0000 UTC m=+14.291536522" lastFinishedPulling="2025-03-17 17:39:00.478727673 +0000 UTC m=+26.602695332" observedRunningTime="2025-03-17 17:39:01.145798464 +0000 UTC m=+27.269766123" watchObservedRunningTime="2025-03-17 17:39:01.618866493 +0000 UTC m=+27.742834152" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.236 [INFO][4263] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.305 [INFO][4263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0 calico-apiserver-6b59c58749- calico-apiserver 99b568dd-b905-4754-b6a7-db2767a8c584 721 0 2025-03-17 17:38:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b59c58749 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b59c58749-s4pv2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali197c2fd2c68 [] []}} ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.305 [INFO][4263] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.530 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" HandleID="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Workload="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.551 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" HandleID="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Workload="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e7800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b59c58749-s4pv2", "timestamp":"2025-03-17 17:39:01.530603053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.551 
[INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.552 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.552 [INFO][4336] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.554 [INFO][4336] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.563 [INFO][4336] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.567 [INFO][4336] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.569 [INFO][4336] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.571 [INFO][4336] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.571 [INFO][4336] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.572 [INFO][4336] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4 Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.576 [INFO][4336] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.582 [INFO][4336] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.582 [INFO][4336] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" host="localhost" Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.582 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:39:01.620843 containerd[1432]: 2025-03-17 17:39:01.582 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" HandleID="k8s-pod-network.78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Workload="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.621569 containerd[1432]: 2025-03-17 17:39:01.586 [INFO][4263] cni-plugin/k8s.go 386: Populated endpoint ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0", GenerateName:"calico-apiserver-6b59c58749-", Namespace:"calico-apiserver", SelfLink:"", UID:"99b568dd-b905-4754-b6a7-db2767a8c584", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b59c58749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b59c58749-s4pv2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali197c2fd2c68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.621569 containerd[1432]: 2025-03-17 17:39:01.586 [INFO][4263] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.621569 containerd[1432]: 2025-03-17 17:39:01.586 [INFO][4263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali197c2fd2c68 ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.621569 containerd[1432]: 2025-03-17 17:39:01.596 [INFO][4263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.621569 containerd[1432]: 2025-03-17 17:39:01.596 [INFO][4263] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0", GenerateName:"calico-apiserver-6b59c58749-", Namespace:"calico-apiserver", SelfLink:"", UID:"99b568dd-b905-4754-b6a7-db2767a8c584", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b59c58749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4", Pod:"calico-apiserver-6b59c58749-s4pv2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali197c2fd2c68", MAC:"ee:19:34:76:ca:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.621569 containerd[1432]: 2025-03-17 17:39:01.617 [INFO][4263] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-s4pv2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--s4pv2-eth0" Mar 17 17:39:01.652412 containerd[1432]: time="2025-03-17T17:39:01.652328673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:01.652412 containerd[1432]: time="2025-03-17T17:39:01.652377951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:01.652412 containerd[1432]: time="2025-03-17T17:39:01.652391431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.652576 containerd[1432]: time="2025-03-17T17:39:01.652466829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.673782 systemd[1]: Started cri-containerd-78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4.scope - libcontainer container 78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4. 
Mar 17 17:39:01.689749 systemd-networkd[1378]: cali3453345cf78: Link UP Mar 17 17:39:01.689932 systemd-networkd[1378]: cali3453345cf78: Gained carrier Mar 17 17:39:01.695715 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.190 [INFO][4240] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.306 [INFO][4240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wbfxc-eth0 csi-node-driver- calico-system cd6988e4-6af5-42c1-bd82-b51b176a8f5e 639 0 2025-03-17 17:38:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wbfxc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3453345cf78 [] []}} ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.306 [INFO][4240] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.530 [INFO][4329] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" HandleID="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Workload="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.551 [INFO][4329] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" HandleID="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Workload="localhost-k8s-csi--node--driver--wbfxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wbfxc", "timestamp":"2025-03-17 17:39:01.530134826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.551 [INFO][4329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.582 [INFO][4329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.582 [INFO][4329] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.655 [INFO][4329] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.658 [INFO][4329] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.668 [INFO][4329] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.670 [INFO][4329] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.672 [INFO][4329] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.672 [INFO][4329] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.673 [INFO][4329] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24 Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.677 [INFO][4329] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.681 [INFO][4329] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.682 [INFO][4329] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" host="localhost" Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.682 [INFO][4329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
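The IPAM exchange above (handle 4329) records a complete assignment cycle: take the host-wide lock, confirm the host's affinity to block 192.168.88.128/26, scan for the next free address (which resolves to .130), write the block back to claim it, release the lock. As an illustration only — a stdlib-only sketch of the next-free-address step, not Calico's actual ipam.go:

    package main

    import (
        "fmt"
        "net"
    )

    // increment returns ip+1, carrying across octets.
    func increment(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    // nextFree scans the block in order and returns the first unallocated address.
    func nextFree(block *net.IPNet, allocated map[string]bool) net.IP {
        for cur := block.IP.Mask(block.Mask); block.Contains(cur); cur = increment(cur) {
            if !allocated[cur.String()] {
                return cur
            }
        }
        return nil // block exhausted
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.88.128/26")
        // .128 is the network address; .129 was claimed for the apiserver pod
        // earlier in this log, so the next claim resolves to .130.
        allocated := map[string]bool{"192.168.88.128": true, "192.168.88.129": true}
        fmt.Println(nextFree(block, allocated)) // 192.168.88.130
    }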
Mar 17 17:39:01.705498 containerd[1432]: 2025-03-17 17:39:01.682 [INFO][4329] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" HandleID="k8s-pod-network.05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Workload="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.706085 containerd[1432]: 2025-03-17 17:39:01.686 [INFO][4240] cni-plugin/k8s.go 386: Populated endpoint ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wbfxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd6988e4-6af5-42c1-bd82-b51b176a8f5e", ResourceVersion:"639", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wbfxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3453345cf78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.706085 containerd[1432]: 2025-03-17 17:39:01.686 [INFO][4240] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.706085 containerd[1432]: 2025-03-17 17:39:01.686 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3453345cf78 ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.706085 containerd[1432]: 2025-03-17 17:39:01.688 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.706085 containerd[1432]: 2025-03-17 17:39:01.688 [INFO][4240] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wbfxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd6988e4-6af5-42c1-bd82-b51b176a8f5e", ResourceVersion:"639", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24", Pod:"csi-node-driver-wbfxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3453345cf78", MAC:"f6:84:37:31:c3:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.706085 containerd[1432]: 2025-03-17 17:39:01.701 [INFO][4240] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24" Namespace="calico-system" Pod="csi-node-driver-wbfxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbfxc-eth0" Mar 17 17:39:01.724675 containerd[1432]: time="2025-03-17T17:39:01.724612042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-s4pv2,Uid:99b568dd-b905-4754-b6a7-db2767a8c584,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4\"" Mar 17 17:39:01.727895 containerd[1432]: time="2025-03-17T17:39:01.727858430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:39:01.732299 containerd[1432]: time="2025-03-17T17:39:01.732212708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:01.732299 containerd[1432]: time="2025-03-17T17:39:01.732284386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:01.732299 containerd[1432]: time="2025-03-17T17:39:01.732295586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.732444 containerd[1432]: time="2025-03-17T17:39:01.732372984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.754853 systemd[1]: Started cri-containerd-05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24.scope - libcontainer container 05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24. 
Mar 17 17:39:01.777560 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:39:01.791105 kubelet[2512]: I0317 17:39:01.790940 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:01.791481 kubelet[2512]: E0317 17:39:01.791403 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:01.814654 containerd[1432]: time="2025-03-17T17:39:01.814298602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbfxc,Uid:cd6988e4-6af5-42c1-bd82-b51b176a8f5e,Namespace:calico-system,Attempt:4,} returns sandbox id \"05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24\"" Mar 17 17:39:01.832235 systemd-networkd[1378]: cali960731f6e3f: Link UP Mar 17 17:39:01.834459 systemd-networkd[1378]: cali960731f6e3f: Gained carrier Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.266 [INFO][4278] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.305 [INFO][4278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--smb5v-eth0 coredns-6f6b679f8f- kube-system fb2cacc2-3045-4f05-a115-8af2b8c3ae93 716 0 2025-03-17 17:38:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-smb5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali960731f6e3f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.305 [INFO][4278] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.535 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" HandleID="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Workload="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.555 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" HandleID="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Workload="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004afa10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-smb5v", "timestamp":"2025-03-17 17:39:01.535958982 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.556 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.682 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.682 [INFO][4344] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.755 [INFO][4344] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.769 [INFO][4344] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.787 [INFO][4344] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.790 [INFO][4344] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.799 [INFO][4344] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.799 [INFO][4344] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.803 [INFO][4344] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.815 [INFO][4344] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.823 [INFO][4344] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.824 [INFO][4344] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" host="localhost" Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.824 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:39:01.861209 containerd[1432]: 2025-03-17 17:39:01.824 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" HandleID="k8s-pod-network.8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Workload="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.862017 containerd[1432]: 2025-03-17 17:39:01.828 [INFO][4278] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--smb5v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fb2cacc2-3045-4f05-a115-8af2b8c3ae93", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-smb5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali960731f6e3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.862017 containerd[1432]: 2025-03-17 17:39:01.829 [INFO][4278] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.862017 containerd[1432]: 2025-03-17 17:39:01.829 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali960731f6e3f ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.862017 containerd[1432]: 2025-03-17 17:39:01.834 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.862017 containerd[1432]: 2025-03-17 17:39:01.834 
[INFO][4278] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--smb5v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fb2cacc2-3045-4f05-a115-8af2b8c3ae93", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df", Pod:"coredns-6f6b679f8f-smb5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali960731f6e3f", MAC:"16:3e:cf:60:02:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.862017 containerd[1432]: 2025-03-17 17:39:01.859 [INFO][4278] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df" Namespace="kube-system" Pod="coredns-6f6b679f8f-smb5v" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--smb5v-eth0" Mar 17 17:39:01.892119 containerd[1432]: time="2025-03-17T17:39:01.892028218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:01.892119 containerd[1432]: time="2025-03-17T17:39:01.892091416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:01.892119 containerd[1432]: time="2025-03-17T17:39:01.892102176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.892338 containerd[1432]: time="2025-03-17T17:39:01.892176654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.918808 systemd[1]: Started cri-containerd-8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df.scope - libcontainer container 8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df. 
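The WorkloadEndpointPort values in the endpoint dump above are printed in hex: Port:0x35 is 53 (the dns and dns-tcp container ports) and Port:0x23c1 is 9153 (the standard coredns metrics port). A one-liner to confirm the conversion:

    package main

    import "fmt"

    func main() {
        // Hex port values as printed in the WorkloadEndpoint dump above.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }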
Mar 17 17:39:01.933832 systemd-networkd[1378]: cali212b5e08e92: Link UP Mar 17 17:39:01.934052 systemd-networkd[1378]: cali212b5e08e92: Gained carrier Mar 17 17:39:01.941005 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.219 [INFO][4253] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.303 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0 calico-kube-controllers-ff5ffdc75- calico-system 7dd0b36a-0dc5-4c34-a561-3245bd3255c4 724 0 2025-03-17 17:38:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ff5ffdc75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-ff5ffdc75-plj5f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali212b5e08e92 [] []}} ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.303 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.544 [INFO][4327] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" HandleID="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Workload="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.557 [INFO][4327] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" HandleID="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Workload="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400045ac80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-ff5ffdc75-plj5f", "timestamp":"2025-03-17 17:39:01.544399825 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.557 [INFO][4327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.824 [INFO][4327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.824 [INFO][4327] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.856 [INFO][4327] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.874 [INFO][4327] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.899 [INFO][4327] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.909 [INFO][4327] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.913 [INFO][4327] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.914 [INFO][4327] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.915 [INFO][4327] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7 Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.921 [INFO][4327] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.927 [INFO][4327] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.927 [INFO][4327] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" host="localhost" Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.927 [INFO][4327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:39:01.947287 containerd[1432]: 2025-03-17 17:39:01.927 [INFO][4327] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" HandleID="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Workload="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.947865 containerd[1432]: 2025-03-17 17:39:01.929 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0", GenerateName:"calico-kube-controllers-ff5ffdc75-", Namespace:"calico-system", SelfLink:"", UID:"7dd0b36a-0dc5-4c34-a561-3245bd3255c4", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ff5ffdc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-ff5ffdc75-plj5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali212b5e08e92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.947865 containerd[1432]: 2025-03-17 17:39:01.929 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.947865 containerd[1432]: 2025-03-17 17:39:01.929 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali212b5e08e92 ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.947865 containerd[1432]: 2025-03-17 17:39:01.934 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.947865 containerd[1432]: 2025-03-17 17:39:01.934 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0", GenerateName:"calico-kube-controllers-ff5ffdc75-", Namespace:"calico-system", SelfLink:"", UID:"7dd0b36a-0dc5-4c34-a561-3245bd3255c4", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ff5ffdc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7", Pod:"calico-kube-controllers-ff5ffdc75-plj5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali212b5e08e92", MAC:"82:49:93:3d:35:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:01.947865 containerd[1432]: 2025-03-17 17:39:01.944 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Namespace="calico-system" Pod="calico-kube-controllers-ff5ffdc75-plj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:01.965733 containerd[1432]: time="2025-03-17T17:39:01.965540433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-smb5v,Uid:fb2cacc2-3045-4f05-a115-8af2b8c3ae93,Namespace:kube-system,Attempt:4,} returns sandbox id \"8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df\"" Mar 17 17:39:01.966686 kubelet[2512]: E0317 17:39:01.966586 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:01.968921 containerd[1432]: time="2025-03-17T17:39:01.968885459Z" level=info msg="CreateContainer within sandbox \"8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:39:01.971543 containerd[1432]: time="2025-03-17T17:39:01.971002879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:01.971764 containerd[1432]: time="2025-03-17T17:39:01.971571023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:01.971764 containerd[1432]: time="2025-03-17T17:39:01.971598102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.971894 containerd[1432]: time="2025-03-17T17:39:01.971813376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:01.989351 containerd[1432]: time="2025-03-17T17:39:01.989240087Z" level=info msg="CreateContainer within sandbox \"8e60592c749cabc2961d52a3d0e4b2cf40f0fc1087d8a78c5c2593bc3d78a5df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56d28d48f2bc8227d6670ad7e05524abebc141e104ef46a5f81ce165a1907707\"" Mar 17 17:39:01.989937 containerd[1432]: time="2025-03-17T17:39:01.989818790Z" level=info msg="StartContainer for \"56d28d48f2bc8227d6670ad7e05524abebc141e104ef46a5f81ce165a1907707\"" Mar 17 17:39:01.991799 systemd[1]: Started cri-containerd-c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7.scope - libcontainer container c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7. Mar 17 17:39:02.008713 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:39:02.019282 systemd-networkd[1378]: cali967b90a0dd0: Link UP Mar 17 17:39:02.019699 systemd-networkd[1378]: cali967b90a0dd0: Gained carrier Mar 17 17:39:02.034990 systemd[1]: Started cri-containerd-56d28d48f2bc8227d6670ad7e05524abebc141e104ef46a5f81ce165a1907707.scope - libcontainer container 56d28d48f2bc8227d6670ad7e05524abebc141e104ef46a5f81ce165a1907707. Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.288 [INFO][4302] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.312 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0 coredns-6f6b679f8f- kube-system 4bcadc09-d993-4b6b-a06f-decd561e1fef 723 0 2025-03-17 17:38:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-4wvdn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali967b90a0dd0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.312 [INFO][4302] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.542 [INFO][4334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" HandleID="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Workload="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.557 [INFO][4334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" HandleID="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" 
Workload="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003d0180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-4wvdn", "timestamp":"2025-03-17 17:39:01.542399241 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.558 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.927 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.927 [INFO][4334] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.957 [INFO][4334] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.974 [INFO][4334] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.991 [INFO][4334] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.995 [INFO][4334] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.998 [INFO][4334] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:01.998 [INFO][4334] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:02.000 [INFO][4334] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:02.006 [INFO][4334] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:02.013 [INFO][4334] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:02.013 [INFO][4334] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" host="localhost" Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:02.013 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:39:02.044916 containerd[1432]: 2025-03-17 17:39:02.013 [INFO][4334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" HandleID="k8s-pod-network.bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Workload="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.045456 containerd[1432]: 2025-03-17 17:39:02.017 [INFO][4302] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4bcadc09-d993-4b6b-a06f-decd561e1fef", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-4wvdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali967b90a0dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:02.045456 containerd[1432]: 2025-03-17 17:39:02.018 [INFO][4302] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.045456 containerd[1432]: 2025-03-17 17:39:02.018 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali967b90a0dd0 ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.045456 containerd[1432]: 2025-03-17 17:39:02.019 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.045456 containerd[1432]: 2025-03-17 17:39:02.020 
[INFO][4302] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4bcadc09-d993-4b6b-a06f-decd561e1fef", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a", Pod:"coredns-6f6b679f8f-4wvdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali967b90a0dd0", MAC:"b6:20:2e:83:19:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:02.045456 containerd[1432]: 2025-03-17 17:39:02.032 [INFO][4302] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wvdn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4wvdn-eth0" Mar 17 17:39:02.058431 containerd[1432]: time="2025-03-17T17:39:02.058343725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5ffdc75-plj5f,Uid:7dd0b36a-0dc5-4c34-a561-3245bd3255c4,Namespace:calico-system,Attempt:4,} returns sandbox id \"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7\"" Mar 17 17:39:02.102283 containerd[1432]: time="2025-03-17T17:39:02.096141663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:02.102283 containerd[1432]: time="2025-03-17T17:39:02.098418681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:02.102283 containerd[1432]: time="2025-03-17T17:39:02.098435041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:02.102283 containerd[1432]: time="2025-03-17T17:39:02.098809310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:02.116888 containerd[1432]: time="2025-03-17T17:39:02.116170921Z" level=info msg="StartContainer for \"56d28d48f2bc8227d6670ad7e05524abebc141e104ef46a5f81ce165a1907707\" returns successfully" Mar 17 17:39:02.128513 systemd-networkd[1378]: calic52757b785f: Link UP Mar 17 17:39:02.129703 systemd-networkd[1378]: calic52757b785f: Gained carrier Mar 17 17:39:02.144123 systemd[1]: Started cri-containerd-bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a.scope - libcontainer container bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a. Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:01.271 [INFO][4290] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:01.303 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0 calico-apiserver-6b59c58749- calico-apiserver f65e465b-a83f-4cf0-bb51-23b66ac6541f 722 0 2025-03-17 17:38:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b59c58749 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b59c58749-tz5hx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic52757b785f [] []}} ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:01.303 [INFO][4290] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:01.546 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" HandleID="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Workload="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:01.561 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" HandleID="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Workload="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000355b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b59c58749-tz5hx", "timestamp":"2025-03-17 17:39:01.546608843 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:01.561 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.013 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.013 [INFO][4331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.059 [INFO][4331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.075 [INFO][4331] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.090 [INFO][4331] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.094 [INFO][4331] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.099 [INFO][4331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.100 [INFO][4331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.103 [INFO][4331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.108 [INFO][4331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.118 [INFO][4331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.118 [INFO][4331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" host="localhost" Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.119 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:39:02.147717 containerd[1432]: 2025-03-17 17:39:02.119 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" HandleID="k8s-pod-network.d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Workload="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.148242 containerd[1432]: 2025-03-17 17:39:02.121 [INFO][4290] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0", GenerateName:"calico-apiserver-6b59c58749-", Namespace:"calico-apiserver", SelfLink:"", UID:"f65e465b-a83f-4cf0-bb51-23b66ac6541f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b59c58749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b59c58749-tz5hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52757b785f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:02.148242 containerd[1432]: 2025-03-17 17:39:02.123 [INFO][4290] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.148242 containerd[1432]: 2025-03-17 17:39:02.123 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic52757b785f ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.148242 containerd[1432]: 2025-03-17 17:39:02.128 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.148242 containerd[1432]: 2025-03-17 17:39:02.129 [INFO][4290] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0", GenerateName:"calico-apiserver-6b59c58749-", Namespace:"calico-apiserver", SelfLink:"", UID:"f65e465b-a83f-4cf0-bb51-23b66ac6541f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b59c58749", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a", Pod:"calico-apiserver-6b59c58749-tz5hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52757b785f", MAC:"72:b1:8f:e8:2d:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:02.148242 containerd[1432]: 2025-03-17 17:39:02.142 [INFO][4290] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a" Namespace="calico-apiserver" Pod="calico-apiserver-6b59c58749-tz5hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b59c58749--tz5hx-eth0" Mar 17 17:39:02.158494 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:39:02.208241 containerd[1432]: time="2025-03-17T17:39:02.205230672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:02.208241 containerd[1432]: time="2025-03-17T17:39:02.205290991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:02.208241 containerd[1432]: time="2025-03-17T17:39:02.205305430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:02.208241 containerd[1432]: time="2025-03-17T17:39:02.205383068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:02.210035 containerd[1432]: time="2025-03-17T17:39:02.209746150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wvdn,Uid:4bcadc09-d993-4b6b-a06f-decd561e1fef,Namespace:kube-system,Attempt:4,} returns sandbox id \"bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a\"" Mar 17 17:39:02.213353 kubelet[2512]: E0317 17:39:02.211807 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:02.218712 containerd[1432]: time="2025-03-17T17:39:02.218658309Z" level=info msg="CreateContainer within sandbox \"bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:39:02.272377 kubelet[2512]: I0317 17:39:02.271833 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:02.272494 containerd[1432]: time="2025-03-17T17:39:02.272282059Z" level=info msg="CreateContainer within sandbox \"bf381f52382572130c4646164ecb70a80ec64698189d5242b6116c31b1c0488a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c006ec066334813415594d1412ff7310318f64d93e821806573447e8a22dd35e\"" Mar 17 17:39:02.272874 systemd[1]: Started cri-containerd-d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a.scope - libcontainer container d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a. Mar 17 17:39:02.276234 kubelet[2512]: E0317 17:39:02.274280 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:02.276234 kubelet[2512]: E0317 17:39:02.274419 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:02.276234 kubelet[2512]: E0317 17:39:02.274493 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:02.275242 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:37198.service - OpenSSH per-connection server daemon (10.0.0.1:37198). Mar 17 17:39:02.278867 containerd[1432]: time="2025-03-17T17:39:02.276544504Z" level=info msg="StartContainer for \"c006ec066334813415594d1412ff7310318f64d93e821806573447e8a22dd35e\"" Mar 17 17:39:02.321852 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:39:02.343460 systemd[1]: Started cri-containerd-c006ec066334813415594d1412ff7310318f64d93e821806573447e8a22dd35e.scope - libcontainer container c006ec066334813415594d1412ff7310318f64d93e821806573447e8a22dd35e. 
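The repeated kubelet "Nameserver limits exceeded" events above reflect a fixed libc resolver limit: only the first three nameserver entries in a pod's resolv.conf are honored, so kubelet warns and applies a truncated list. A sketch of that check, assuming the conventional limit of 3:

// Minimal sketch of the check behind kubelet's "Nameserver limits exceeded"
// warning; the parsing here is simplified for illustration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic libc resolver limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// e.g. "1.1.1.1 1.0.0.1 8.8.8.8" as in the log's applied line
		fmt.Printf("nameserver limits exceeded, applying only: %v\n",
			servers[:maxNameservers])
	}
}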
Mar 17 17:39:02.383336 containerd[1432]: time="2025-03-17T17:39:02.380867882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b59c58749-tz5hx,Uid:f65e465b-a83f-4cf0-bb51-23b66ac6541f,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a\"" Mar 17 17:39:02.386900 sshd[4806]: Accepted publickey for core from 10.0.0.1 port 37198 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:02.389654 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:02.404407 systemd-logind[1422]: New session 8 of user core. Mar 17 17:39:02.410806 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:39:02.413585 containerd[1432]: time="2025-03-17T17:39:02.413493200Z" level=info msg="StartContainer for \"c006ec066334813415594d1412ff7310318f64d93e821806573447e8a22dd35e\" returns successfully" Mar 17 17:39:02.478662 kernel: bpftool[4904]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:39:02.641427 systemd-networkd[1378]: vxlan.calico: Link UP Mar 17 17:39:02.641436 systemd-networkd[1378]: vxlan.calico: Gained carrier Mar 17 17:39:02.671657 sshd[4866]: Connection closed by 10.0.0.1 port 37198 Mar 17 17:39:02.672743 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:02.677844 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:39:02.680239 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:37198.service: Deactivated successfully. Mar 17 17:39:02.683431 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:39:02.684312 systemd-logind[1422]: Removed session 8. Mar 17 17:39:03.262134 containerd[1432]: time="2025-03-17T17:39:03.262061788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:03.262798 containerd[1432]: time="2025-03-17T17:39:03.262760490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 17 17:39:03.263483 containerd[1432]: time="2025-03-17T17:39:03.263443672Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:03.266305 containerd[1432]: time="2025-03-17T17:39:03.266270598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:03.267056 containerd[1432]: time="2025-03-17T17:39:03.267030619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.538997673s" Mar 17 17:39:03.267109 containerd[1432]: time="2025-03-17T17:39:03.267062258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 17 17:39:03.269502 containerd[1432]: time="2025-03-17T17:39:03.268271986Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:39:03.272616 systemd-networkd[1378]: cali197c2fd2c68: Gained IPv6LL Mar 17 17:39:03.276641 kubelet[2512]: E0317 17:39:03.276596 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:03.283560 kubelet[2512]: E0317 17:39:03.283523 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:03.291293 kubelet[2512]: I0317 17:39:03.290144 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-smb5v" podStartSLOduration=23.290128337 podStartE2EDuration="23.290128337s" podCreationTimestamp="2025-03-17 17:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:02.288989687 +0000 UTC m=+28.412957346" watchObservedRunningTime="2025-03-17 17:39:03.290128337 +0000 UTC m=+29.414096076" Mar 17 17:39:03.296698 containerd[1432]: time="2025-03-17T17:39:03.296661126Z" level=info msg="CreateContainer within sandbox \"78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:39:03.302906 kubelet[2512]: I0317 17:39:03.302850 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4wvdn" podStartSLOduration=23.302831886 podStartE2EDuration="23.302831886s" podCreationTimestamp="2025-03-17 17:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:03.290520486 +0000 UTC m=+29.414488185" watchObservedRunningTime="2025-03-17 17:39:03.302831886 +0000 UTC m=+29.426799545" Mar 17 17:39:03.335642 containerd[1432]: time="2025-03-17T17:39:03.335544753Z" level=info msg="CreateContainer within sandbox \"78eabcb1cd7e3c7a6abcf717638405bda3964708d9d57eba02111b68d2e16ce4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3a30685f1360eb32d8b32c59a0f06324f186ac0c0f132566717432632567d11\"" Mar 17 17:39:03.336708 containerd[1432]: time="2025-03-17T17:39:03.336149777Z" level=info msg="StartContainer for \"f3a30685f1360eb32d8b32c59a0f06324f186ac0c0f132566717432632567d11\"" Mar 17 17:39:03.388792 systemd[1]: Started cri-containerd-f3a30685f1360eb32d8b32c59a0f06324f186ac0c0f132566717432632567d11.scope - libcontainer container f3a30685f1360eb32d8b32c59a0f06324f186ac0c0f132566717432632567d11. 
Mar 17 17:39:03.422070 containerd[1432]: time="2025-03-17T17:39:03.422022340Z" level=info msg="StartContainer for \"f3a30685f1360eb32d8b32c59a0f06324f186ac0c0f132566717432632567d11\" returns successfully" Mar 17 17:39:03.463731 systemd-networkd[1378]: cali967b90a0dd0: Gained IPv6LL Mar 17 17:39:03.527855 systemd-networkd[1378]: cali3453345cf78: Gained IPv6LL Mar 17 17:39:03.592117 systemd-networkd[1378]: cali960731f6e3f: Gained IPv6LL Mar 17 17:39:03.912754 systemd-networkd[1378]: calic52757b785f: Gained IPv6LL Mar 17 17:39:03.975741 systemd-networkd[1378]: cali212b5e08e92: Gained IPv6LL Mar 17 17:39:04.287976 kubelet[2512]: E0317 17:39:04.287864 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:04.287976 kubelet[2512]: E0317 17:39:04.287904 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:04.301734 kubelet[2512]: I0317 17:39:04.301661 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b59c58749-s4pv2" podStartSLOduration=15.759745383 podStartE2EDuration="17.301542539s" podCreationTimestamp="2025-03-17 17:38:47 +0000 UTC" firstStartedPulling="2025-03-17 17:39:01.72608892 +0000 UTC m=+27.850056579" lastFinishedPulling="2025-03-17 17:39:03.267886076 +0000 UTC m=+29.391853735" observedRunningTime="2025-03-17 17:39:04.300000177 +0000 UTC m=+30.423967876" watchObservedRunningTime="2025-03-17 17:39:04.301542539 +0000 UTC m=+30.425510198" Mar 17 17:39:04.487688 containerd[1432]: time="2025-03-17T17:39:04.487610302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:04.488448 containerd[1432]: time="2025-03-17T17:39:04.488401162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:39:04.489049 containerd[1432]: time="2025-03-17T17:39:04.489010427Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:04.490907 containerd[1432]: time="2025-03-17T17:39:04.490874100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:04.492312 containerd[1432]: time="2025-03-17T17:39:04.492282865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.223978439s" Mar 17 17:39:04.492360 containerd[1432]: time="2025-03-17T17:39:04.492321704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:39:04.494150 containerd[1432]: time="2025-03-17T17:39:04.493978142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:39:04.496586 containerd[1432]: 
time="2025-03-17T17:39:04.496554477Z" level=info msg="CreateContainer within sandbox \"05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:39:04.511073 containerd[1432]: time="2025-03-17T17:39:04.510980595Z" level=info msg="CreateContainer within sandbox \"05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"92718a34e65055a50e24ed5d5dfb9aa3dd08fb6e4f0bf7777b40b8178c04b3c2\"" Mar 17 17:39:04.513400 containerd[1432]: time="2025-03-17T17:39:04.511799294Z" level=info msg="StartContainer for \"92718a34e65055a50e24ed5d5dfb9aa3dd08fb6e4f0bf7777b40b8178c04b3c2\"" Mar 17 17:39:04.556844 systemd[1]: Started cri-containerd-92718a34e65055a50e24ed5d5dfb9aa3dd08fb6e4f0bf7777b40b8178c04b3c2.scope - libcontainer container 92718a34e65055a50e24ed5d5dfb9aa3dd08fb6e4f0bf7777b40b8178c04b3c2. Mar 17 17:39:04.587425 containerd[1432]: time="2025-03-17T17:39:04.587386074Z" level=info msg="StartContainer for \"92718a34e65055a50e24ed5d5dfb9aa3dd08fb6e4f0bf7777b40b8178c04b3c2\" returns successfully" Mar 17 17:39:04.679851 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Mar 17 17:39:05.291878 kubelet[2512]: I0317 17:39:05.291830 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:05.292227 kubelet[2512]: E0317 17:39:05.292126 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:05.841686 containerd[1432]: time="2025-03-17T17:39:05.841611480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:05.842263 containerd[1432]: time="2025-03-17T17:39:05.842214345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 17 17:39:05.843038 containerd[1432]: time="2025-03-17T17:39:05.843013726Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:05.845091 containerd[1432]: time="2025-03-17T17:39:05.845037917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:05.845856 containerd[1432]: time="2025-03-17T17:39:05.845823258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.351812677s" Mar 17 17:39:05.845929 containerd[1432]: time="2025-03-17T17:39:05.845858217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 17 17:39:05.855344 containerd[1432]: time="2025-03-17T17:39:05.855096033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:39:05.867578 containerd[1432]: 
time="2025-03-17T17:39:05.867533171Z" level=info msg="CreateContainer within sandbox \"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:39:05.910738 containerd[1432]: time="2025-03-17T17:39:05.910688444Z" level=info msg="CreateContainer within sandbox \"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\"" Mar 17 17:39:05.912148 containerd[1432]: time="2025-03-17T17:39:05.911433306Z" level=info msg="StartContainer for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\"" Mar 17 17:39:05.950806 systemd[1]: Started cri-containerd-886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f.scope - libcontainer container 886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f. Mar 17 17:39:05.991648 containerd[1432]: time="2025-03-17T17:39:05.990967536Z" level=info msg="StartContainer for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" returns successfully" Mar 17 17:39:06.209524 containerd[1432]: time="2025-03-17T17:39:06.209466002Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:06.209966 containerd[1432]: time="2025-03-17T17:39:06.209906871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 17 17:39:06.212472 containerd[1432]: time="2025-03-17T17:39:06.212436812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 357.29934ms" Mar 17 17:39:06.212472 containerd[1432]: time="2025-03-17T17:39:06.212475051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 17 17:39:06.214920 containerd[1432]: time="2025-03-17T17:39:06.213807140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:39:06.216048 containerd[1432]: time="2025-03-17T17:39:06.215989369Z" level=info msg="CreateContainer within sandbox \"d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:39:06.226451 containerd[1432]: time="2025-03-17T17:39:06.226300487Z" level=info msg="CreateContainer within sandbox \"d6e5edcd94993504eb506c1daf12e639b62dba3fdd0286f711b304e47328b71a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"45ee8f324625984604c63d0fa706e805f739172023c8a686dd6bba173d587316\"" Mar 17 17:39:06.227103 containerd[1432]: time="2025-03-17T17:39:06.227075069Z" level=info msg="StartContainer for \"45ee8f324625984604c63d0fa706e805f739172023c8a686dd6bba173d587316\"" Mar 17 17:39:06.256797 systemd[1]: Started cri-containerd-45ee8f324625984604c63d0fa706e805f739172023c8a686dd6bba173d587316.scope - libcontainer container 45ee8f324625984604c63d0fa706e805f739172023c8a686dd6bba173d587316. 
Mar 17 17:39:06.287437 containerd[1432]: time="2025-03-17T17:39:06.287395334Z" level=info msg="StartContainer for \"45ee8f324625984604c63d0fa706e805f739172023c8a686dd6bba173d587316\" returns successfully" Mar 17 17:39:06.325268 kubelet[2512]: I0317 17:39:06.325203 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b59c58749-tz5hx" podStartSLOduration=15.495082996 podStartE2EDuration="19.325183968s" podCreationTimestamp="2025-03-17 17:38:47 +0000 UTC" firstStartedPulling="2025-03-17 17:39:02.383145101 +0000 UTC m=+28.507112720" lastFinishedPulling="2025-03-17 17:39:06.213246033 +0000 UTC m=+32.337213692" observedRunningTime="2025-03-17 17:39:06.313113051 +0000 UTC m=+32.437080710" watchObservedRunningTime="2025-03-17 17:39:06.325183968 +0000 UTC m=+32.449151587" Mar 17 17:39:06.326459 kubelet[2512]: I0317 17:39:06.325587 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ff5ffdc75-plj5f" podStartSLOduration=15.534058669 podStartE2EDuration="19.325579518s" podCreationTimestamp="2025-03-17 17:38:47 +0000 UTC" firstStartedPulling="2025-03-17 17:39:02.06332063 +0000 UTC m=+28.187288289" lastFinishedPulling="2025-03-17 17:39:05.854841479 +0000 UTC m=+31.978809138" observedRunningTime="2025-03-17 17:39:06.325536279 +0000 UTC m=+32.449503938" watchObservedRunningTime="2025-03-17 17:39:06.325579518 +0000 UTC m=+32.449547177" Mar 17 17:39:07.304925 kubelet[2512]: I0317 17:39:07.304533 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:07.304925 kubelet[2512]: I0317 17:39:07.304548 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:07.342070 containerd[1432]: time="2025-03-17T17:39:07.342027138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:07.344691 containerd[1432]: time="2025-03-17T17:39:07.343573383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 17 17:39:07.345327 containerd[1432]: time="2025-03-17T17:39:07.345294944Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:07.349049 containerd[1432]: time="2025-03-17T17:39:07.347934244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:07.349049 containerd[1432]: time="2025-03-17T17:39:07.348680467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.134838128s" Mar 17 17:39:07.349049 containerd[1432]: time="2025-03-17T17:39:07.348706466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 17 17:39:07.351279 containerd[1432]: 
time="2025-03-17T17:39:07.351250489Z" level=info msg="CreateContainer within sandbox \"05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:39:07.375346 containerd[1432]: time="2025-03-17T17:39:07.375293303Z" level=info msg="CreateContainer within sandbox \"05ced730b3d859de7c6bdf8ab78142a9b0bdf4c70a95416ed9d01bb8a364ab24\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0676c2022579fa182057263443a502849cf42105e5ddfc3e25db51f4d1509e8d\"" Mar 17 17:39:07.376337 containerd[1432]: time="2025-03-17T17:39:07.376309360Z" level=info msg="StartContainer for \"0676c2022579fa182057263443a502849cf42105e5ddfc3e25db51f4d1509e8d\"" Mar 17 17:39:07.403798 systemd[1]: Started cri-containerd-0676c2022579fa182057263443a502849cf42105e5ddfc3e25db51f4d1509e8d.scope - libcontainer container 0676c2022579fa182057263443a502849cf42105e5ddfc3e25db51f4d1509e8d. Mar 17 17:39:07.430680 containerd[1432]: time="2025-03-17T17:39:07.430606128Z" level=info msg="StartContainer for \"0676c2022579fa182057263443a502849cf42105e5ddfc3e25db51f4d1509e8d\" returns successfully" Mar 17 17:39:07.689541 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:36406.service - OpenSSH per-connection server daemon (10.0.0.1:36406). Mar 17 17:39:07.760585 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 36406 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:07.761133 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:07.765298 systemd-logind[1422]: New session 9 of user core. Mar 17 17:39:07.771796 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:39:07.979141 sshd[5217]: Connection closed by 10.0.0.1 port 36406 Mar 17 17:39:07.979410 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:07.984214 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:36406.service: Deactivated successfully. Mar 17 17:39:07.985982 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:39:07.987241 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:39:07.988099 systemd-logind[1422]: Removed session 9. Mar 17 17:39:08.041169 kubelet[2512]: I0317 17:39:08.041115 2512 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:39:08.043469 kubelet[2512]: I0317 17:39:08.043436 2512 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:39:08.168733 kubelet[2512]: I0317 17:39:08.168686 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:08.169134 kubelet[2512]: E0317 17:39:08.169100 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:08.313007 kubelet[2512]: E0317 17:39:08.312677 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:12.990510 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:44830.service - OpenSSH per-connection server daemon (10.0.0.1:44830). 
Mar 17 17:39:13.040930 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 44830 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:13.042095 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:13.045712 systemd-logind[1422]: New session 10 of user core. Mar 17 17:39:13.053759 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:39:13.232966 sshd[5295]: Connection closed by 10.0.0.1 port 44830 Mar 17 17:39:13.233672 sshd-session[5293]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:13.242125 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:44830.service: Deactivated successfully. Mar 17 17:39:13.244115 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:39:13.245495 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:39:13.247782 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:44840.service - OpenSSH per-connection server daemon (10.0.0.1:44840). Mar 17 17:39:13.251342 systemd-logind[1422]: Removed session 10. Mar 17 17:39:13.298184 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 44840 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:13.299708 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:13.304285 systemd-logind[1422]: New session 11 of user core. Mar 17 17:39:13.311812 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:39:13.499749 sshd[5312]: Connection closed by 10.0.0.1 port 44840 Mar 17 17:39:13.501292 sshd-session[5310]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:13.511216 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:44840.service: Deactivated successfully. Mar 17 17:39:13.516168 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:39:13.520142 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:39:13.531655 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:44856.service - OpenSSH per-connection server daemon (10.0.0.1:44856). Mar 17 17:39:13.533004 systemd-logind[1422]: Removed session 11. Mar 17 17:39:13.572247 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 44856 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:13.573545 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:13.577972 systemd-logind[1422]: New session 12 of user core. Mar 17 17:39:13.592787 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:39:13.738101 sshd[5329]: Connection closed by 10.0.0.1 port 44856 Mar 17 17:39:13.738457 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:13.741286 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:44856.service: Deactivated successfully. Mar 17 17:39:13.743854 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:39:13.745697 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:39:13.746640 systemd-logind[1422]: Removed session 12. 
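Sessions 10 through 12 above follow the standard sshd triple: publickey accepted, PAM opens the session (logind creates session-N.scope), then the scope is deactivated on disconnect. A minimal Go client that would produce exactly such a triple, assuming a key authorized for user core (the key path, address, and host-key shortcut are for illustration only):

// Sketch of a short-lived SSH session against the node in the log.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, never for production
	}
	client, err := ssh.Dial("tcp", "10.0.0.119:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // triggers "session closed" and scope deactivation server-side

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}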
Mar 17 17:39:16.064881 kubelet[2512]: I0317 17:39:16.064829 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:16.088759 kubelet[2512]: I0317 17:39:16.088690 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wbfxc" podStartSLOduration=23.567645581 podStartE2EDuration="29.088672147s" podCreationTimestamp="2025-03-17 17:38:47 +0000 UTC" firstStartedPulling="2025-03-17 17:39:01.828666278 +0000 UTC m=+27.952633897" lastFinishedPulling="2025-03-17 17:39:07.349692804 +0000 UTC m=+33.473660463" observedRunningTime="2025-03-17 17:39:08.326895782 +0000 UTC m=+34.450863441" watchObservedRunningTime="2025-03-17 17:39:16.088672147 +0000 UTC m=+42.212639766" Mar 17 17:39:18.749875 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:44872.service - OpenSSH per-connection server daemon (10.0.0.1:44872). Mar 17 17:39:18.785492 kubelet[2512]: I0317 17:39:18.785443 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:18.821564 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 44872 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:18.824920 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:18.831870 systemd-logind[1422]: New session 13 of user core. Mar 17 17:39:18.841177 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:39:18.865611 systemd[1]: run-containerd-runc-k8s.io-886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f-runc.pfu73g.mount: Deactivated successfully. Mar 17 17:39:19.041737 sshd[5370]: Connection closed by 10.0.0.1 port 44872 Mar 17 17:39:19.042186 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:19.057140 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:44872.service: Deactivated successfully. Mar 17 17:39:19.058959 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:39:19.060507 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:39:19.075904 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:44878.service - OpenSSH per-connection server daemon (10.0.0.1:44878). Mar 17 17:39:19.076976 systemd-logind[1422]: Removed session 13. Mar 17 17:39:19.112834 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 44878 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:19.114315 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:19.118391 systemd-logind[1422]: New session 14 of user core. Mar 17 17:39:19.128782 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:39:19.360059 sshd[5404]: Connection closed by 10.0.0.1 port 44878 Mar 17 17:39:19.360456 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:19.372694 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:44878.service: Deactivated successfully. Mar 17 17:39:19.374473 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:39:19.375826 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:39:19.385208 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:44888.service - OpenSSH per-connection server daemon (10.0.0.1:44888). Mar 17 17:39:19.386074 systemd-logind[1422]: Removed session 14. 
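The csi-node-driver startup entry above makes the latency tracker's arithmetic visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (29.089s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling (about 5.521s), so pulls do not count against the SLO, giving 23.568s. A sketch of that computation with the log's own timestamps (field handling is illustrative; kubelet itself subtracts monotonic readings, the m=+... values, so this wall-clock reconstruction agrees with the logged SLO figure only to within about 40ns):

// Reconstructing the pod_startup_latency_tracker numbers for csi-node-driver-wbfxc.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the log's "2025-03-17 17:38:47 +0000 UTC" form;
	// fractional seconds are optional in Go's reference layout.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-03-17 17:38:47 +0000 UTC")
	firstPull := mustParse("2025-03-17 17:39:01.828666278 +0000 UTC")
	lastPull := mustParse("2025-03-17 17:39:07.349692804 +0000 UTC")
	watchRunning := mustParse("2025-03-17 17:39:16.088672147 +0000 UTC")

	e2e := watchRunning.Sub(created)     // podStartE2EDuration: 29.088672147s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~23.5676s
	fmt.Println(slo, e2e)
}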
Mar 17 17:39:19.432580 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 44888 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:19.433105 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:19.436863 systemd-logind[1422]: New session 15 of user core. Mar 17 17:39:19.446797 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:39:21.110988 sshd[5417]: Connection closed by 10.0.0.1 port 44888 Mar 17 17:39:21.111831 sshd-session[5415]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:21.120944 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:44888.service: Deactivated successfully. Mar 17 17:39:21.123791 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:39:21.125824 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:39:21.128748 systemd-logind[1422]: Removed session 15. Mar 17 17:39:21.136004 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:44890.service - OpenSSH per-connection server daemon (10.0.0.1:44890). Mar 17 17:39:21.188348 sshd[5436]: Accepted publickey for core from 10.0.0.1 port 44890 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:21.189426 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:21.194324 systemd-logind[1422]: New session 16 of user core. Mar 17 17:39:21.206806 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:39:21.532418 sshd[5438]: Connection closed by 10.0.0.1 port 44890 Mar 17 17:39:21.533617 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:21.545850 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:44890.service: Deactivated successfully. Mar 17 17:39:21.547596 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:39:21.549115 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:39:21.554863 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:44900.service - OpenSSH per-connection server daemon (10.0.0.1:44900). Mar 17 17:39:21.555957 systemd-logind[1422]: Removed session 16. Mar 17 17:39:21.589458 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 44900 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:21.590587 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:21.594278 systemd-logind[1422]: New session 17 of user core. Mar 17 17:39:21.607766 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:39:21.755740 sshd[5451]: Connection closed by 10.0.0.1 port 44900 Mar 17 17:39:21.756092 sshd-session[5449]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:21.759216 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:44900.service: Deactivated successfully. Mar 17 17:39:21.762385 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:39:21.763246 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:39:21.764692 systemd-logind[1422]: Removed session 17. Mar 17 17:39:24.249175 kubelet[2512]: I0317 17:39:24.248942 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:26.772132 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:42928.service - OpenSSH per-connection server daemon (10.0.0.1:42928). 
Mar 17 17:39:26.822510 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 42928 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:26.823961 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:26.828534 systemd-logind[1422]: New session 18 of user core. Mar 17 17:39:26.837831 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:39:26.976752 sshd[5481]: Connection closed by 10.0.0.1 port 42928 Mar 17 17:39:26.977174 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:26.980528 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:42928.service: Deactivated successfully. Mar 17 17:39:26.982994 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:39:26.984628 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:39:26.986312 systemd-logind[1422]: Removed session 18. Mar 17 17:39:31.992270 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:42944.service - OpenSSH per-connection server daemon (10.0.0.1:42944). Mar 17 17:39:32.065718 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 42944 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:32.067828 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:32.072451 systemd-logind[1422]: New session 19 of user core. Mar 17 17:39:32.078825 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:39:32.221661 sshd[5496]: Connection closed by 10.0.0.1 port 42944 Mar 17 17:39:32.222028 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:32.225444 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:42944.service: Deactivated successfully. Mar 17 17:39:32.227416 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:39:32.228465 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:39:32.230261 systemd-logind[1422]: Removed session 19. Mar 17 17:39:33.967450 containerd[1432]: time="2025-03-17T17:39:33.967399098Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" Mar 17 17:39:33.967847 containerd[1432]: time="2025-03-17T17:39:33.967519856Z" level=info msg="TearDown network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" successfully" Mar 17 17:39:33.967847 containerd[1432]: time="2025-03-17T17:39:33.967531336Z" level=info msg="StopPodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" returns successfully" Mar 17 17:39:33.968331 containerd[1432]: time="2025-03-17T17:39:33.968304486Z" level=info msg="RemovePodSandbox for \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" Mar 17 17:39:33.968376 containerd[1432]: time="2025-03-17T17:39:33.968346765Z" level=info msg="Forcibly stopping sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\"" Mar 17 17:39:33.968448 containerd[1432]: time="2025-03-17T17:39:33.968432684Z" level=info msg="TearDown network for sandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" successfully" Mar 17 17:39:33.976489 containerd[1432]: time="2025-03-17T17:39:33.976422937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Mar 17 17:39:33.976612 containerd[1432]: time="2025-03-17T17:39:33.976522696Z" level=info msg="RemovePodSandbox \"0866014ada6b7a4b9d611e947c68f28791ae096440e82b52126f78ccd55857f8\" returns successfully" Mar 17 17:39:33.977058 containerd[1432]: time="2025-03-17T17:39:33.977026329Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" Mar 17 17:39:33.977153 containerd[1432]: time="2025-03-17T17:39:33.977136928Z" level=info msg="TearDown network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" successfully" Mar 17 17:39:33.977185 containerd[1432]: time="2025-03-17T17:39:33.977152447Z" level=info msg="StopPodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" returns successfully" Mar 17 17:39:33.977527 containerd[1432]: time="2025-03-17T17:39:33.977509083Z" level=info msg="RemovePodSandbox for \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" Mar 17 17:39:33.977563 containerd[1432]: time="2025-03-17T17:39:33.977533682Z" level=info msg="Forcibly stopping sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\"" Mar 17 17:39:33.977615 containerd[1432]: time="2025-03-17T17:39:33.977596361Z" level=info msg="TearDown network for sandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" successfully" Mar 17 17:39:33.980326 containerd[1432]: time="2025-03-17T17:39:33.980255446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:33.980366 containerd[1432]: time="2025-03-17T17:39:33.980351965Z" level=info msg="RemovePodSandbox \"80dd8b50366243136540dbb65c7e865c4a31f998a04d42fccb8ec8c2a672bc12\" returns successfully" Mar 17 17:39:33.980722 containerd[1432]: time="2025-03-17T17:39:33.980693760Z" level=info msg="StopPodSandbox for \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\"" Mar 17 17:39:33.980808 containerd[1432]: time="2025-03-17T17:39:33.980792279Z" level=info msg="TearDown network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" successfully" Mar 17 17:39:33.980834 containerd[1432]: time="2025-03-17T17:39:33.980807518Z" level=info msg="StopPodSandbox for \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" returns successfully" Mar 17 17:39:33.981080 containerd[1432]: time="2025-03-17T17:39:33.981033075Z" level=info msg="RemovePodSandbox for \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\"" Mar 17 17:39:33.981122 containerd[1432]: time="2025-03-17T17:39:33.981088515Z" level=info msg="Forcibly stopping sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\"" Mar 17 17:39:33.981174 containerd[1432]: time="2025-03-17T17:39:33.981159874Z" level=info msg="TearDown network for sandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" successfully" Mar 17 17:39:33.983740 containerd[1432]: time="2025-03-17T17:39:33.983700240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:33.983799 containerd[1432]: time="2025-03-17T17:39:33.983759479Z" level=info msg="RemovePodSandbox \"1843a704a94ce7ea01c0fb514de82298d35bee77d5a89c6b7b4216855a196447\" returns successfully" Mar 17 17:39:33.984162 containerd[1432]: time="2025-03-17T17:39:33.984129474Z" level=info msg="StopPodSandbox for \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\"" Mar 17 17:39:33.984235 containerd[1432]: time="2025-03-17T17:39:33.984219433Z" level=info msg="TearDown network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\" successfully" Mar 17 17:39:33.984271 containerd[1432]: time="2025-03-17T17:39:33.984233433Z" level=info msg="StopPodSandbox for \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\" returns successfully" Mar 17 17:39:33.984702 containerd[1432]: time="2025-03-17T17:39:33.984682227Z" level=info msg="RemovePodSandbox for \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\"" Mar 17 17:39:33.984734 containerd[1432]: time="2025-03-17T17:39:33.984708706Z" level=info msg="Forcibly stopping sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\"" Mar 17 17:39:33.984786 containerd[1432]: time="2025-03-17T17:39:33.984768385Z" level=info msg="TearDown network for sandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\" successfully" Mar 17 17:39:33.996909 containerd[1432]: time="2025-03-17T17:39:33.996659106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:33.996909 containerd[1432]: time="2025-03-17T17:39:33.996751065Z" level=info msg="RemovePodSandbox \"fe4f61c7c0b7d6351c1040af4fbc6b77929b3e8f98e9cd886b74b2bb3cbdf410\" returns successfully" Mar 17 17:39:33.997414 containerd[1432]: time="2025-03-17T17:39:33.997227018Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" Mar 17 17:39:33.997414 containerd[1432]: time="2025-03-17T17:39:33.997330457Z" level=info msg="TearDown network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" successfully" Mar 17 17:39:33.997414 containerd[1432]: time="2025-03-17T17:39:33.997341217Z" level=info msg="StopPodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" returns successfully" Mar 17 17:39:33.997648 containerd[1432]: time="2025-03-17T17:39:33.997591374Z" level=info msg="RemovePodSandbox for \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" Mar 17 17:39:33.997697 containerd[1432]: time="2025-03-17T17:39:33.997649853Z" level=info msg="Forcibly stopping sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\"" Mar 17 17:39:33.997738 containerd[1432]: time="2025-03-17T17:39:33.997717212Z" level=info msg="TearDown network for sandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" successfully" Mar 17 17:39:34.005571 containerd[1432]: time="2025-03-17T17:39:34.005522428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.005718 containerd[1432]: time="2025-03-17T17:39:34.005591747Z" level=info msg="RemovePodSandbox \"562ec3be4be8d4aa968edb0c9d84012d3ca8e35a05250906e62b1d0146e15430\" returns successfully" Mar 17 17:39:34.006715 containerd[1432]: time="2025-03-17T17:39:34.006443816Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" Mar 17 17:39:34.006715 containerd[1432]: time="2025-03-17T17:39:34.006541574Z" level=info msg="TearDown network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" successfully" Mar 17 17:39:34.006715 containerd[1432]: time="2025-03-17T17:39:34.006554134Z" level=info msg="StopPodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" returns successfully" Mar 17 17:39:34.006870 containerd[1432]: time="2025-03-17T17:39:34.006841770Z" level=info msg="RemovePodSandbox for \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" Mar 17 17:39:34.006900 containerd[1432]: time="2025-03-17T17:39:34.006871770Z" level=info msg="Forcibly stopping sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\"" Mar 17 17:39:34.006991 containerd[1432]: time="2025-03-17T17:39:34.006948409Z" level=info msg="TearDown network for sandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" successfully" Mar 17 17:39:34.009945 containerd[1432]: time="2025-03-17T17:39:34.009900050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.010037 containerd[1432]: time="2025-03-17T17:39:34.009970529Z" level=info msg="RemovePodSandbox \"bab7a6e50abc0fd4a5d9b0e8fb1cec22d650891383438f16303d33456b94c975\" returns successfully" Mar 17 17:39:34.010377 containerd[1432]: time="2025-03-17T17:39:34.010337804Z" level=info msg="StopPodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\"" Mar 17 17:39:34.010455 containerd[1432]: time="2025-03-17T17:39:34.010437763Z" level=info msg="TearDown network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" successfully" Mar 17 17:39:34.010455 containerd[1432]: time="2025-03-17T17:39:34.010452683Z" level=info msg="StopPodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" returns successfully" Mar 17 17:39:34.010967 containerd[1432]: time="2025-03-17T17:39:34.010695479Z" level=info msg="RemovePodSandbox for \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\"" Mar 17 17:39:34.010967 containerd[1432]: time="2025-03-17T17:39:34.010718999Z" level=info msg="Forcibly stopping sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\"" Mar 17 17:39:34.010967 containerd[1432]: time="2025-03-17T17:39:34.010785838Z" level=info msg="TearDown network for sandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" successfully" Mar 17 17:39:34.013128 containerd[1432]: time="2025-03-17T17:39:34.013081448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.013195 containerd[1432]: time="2025-03-17T17:39:34.013146287Z" level=info msg="RemovePodSandbox \"367ef6b05ce0e602fb21c741786d1ab2f6e305cc5da5b51696d5a83fa4559a3e\" returns successfully" Mar 17 17:39:34.013842 containerd[1432]: time="2025-03-17T17:39:34.013644040Z" level=info msg="StopPodSandbox for \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\"" Mar 17 17:39:34.013842 containerd[1432]: time="2025-03-17T17:39:34.013742199Z" level=info msg="TearDown network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\" successfully" Mar 17 17:39:34.013842 containerd[1432]: time="2025-03-17T17:39:34.013753479Z" level=info msg="StopPodSandbox for \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\" returns successfully" Mar 17 17:39:34.014132 containerd[1432]: time="2025-03-17T17:39:34.013983276Z" level=info msg="RemovePodSandbox for \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\"" Mar 17 17:39:34.014132 containerd[1432]: time="2025-03-17T17:39:34.014013235Z" level=info msg="Forcibly stopping sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\"" Mar 17 17:39:34.014132 containerd[1432]: time="2025-03-17T17:39:34.014087314Z" level=info msg="TearDown network for sandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\" successfully" Mar 17 17:39:34.016394 containerd[1432]: time="2025-03-17T17:39:34.016350524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.016478 containerd[1432]: time="2025-03-17T17:39:34.016414683Z" level=info msg="RemovePodSandbox \"97c4e96cc24c846fbaaa465e55a4a9e74c2016c394acdc3c6dcbfe9c74930b54\" returns successfully" Mar 17 17:39:34.017109 containerd[1432]: time="2025-03-17T17:39:34.017011916Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" Mar 17 17:39:34.017171 containerd[1432]: time="2025-03-17T17:39:34.017109274Z" level=info msg="TearDown network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" successfully" Mar 17 17:39:34.017171 containerd[1432]: time="2025-03-17T17:39:34.017120794Z" level=info msg="StopPodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" returns successfully" Mar 17 17:39:34.017800 containerd[1432]: time="2025-03-17T17:39:34.017549068Z" level=info msg="RemovePodSandbox for \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" Mar 17 17:39:34.017800 containerd[1432]: time="2025-03-17T17:39:34.017576548Z" level=info msg="Forcibly stopping sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\"" Mar 17 17:39:34.019127 containerd[1432]: time="2025-03-17T17:39:34.017973543Z" level=info msg="TearDown network for sandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" successfully" Mar 17 17:39:34.020610 containerd[1432]: time="2025-03-17T17:39:34.020565708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.020747 containerd[1432]: time="2025-03-17T17:39:34.020710626Z" level=info msg="RemovePodSandbox \"90539ef26d6cc675ae4afe6c1c785cb1cb167a9da20ee6b33ad1547d3751a8d3\" returns successfully" Mar 17 17:39:34.021171 containerd[1432]: time="2025-03-17T17:39:34.021145501Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" Mar 17 17:39:34.021268 containerd[1432]: time="2025-03-17T17:39:34.021248459Z" level=info msg="TearDown network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" successfully" Mar 17 17:39:34.021268 containerd[1432]: time="2025-03-17T17:39:34.021264539Z" level=info msg="StopPodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" returns successfully" Mar 17 17:39:34.022955 containerd[1432]: time="2025-03-17T17:39:34.021630534Z" level=info msg="RemovePodSandbox for \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" Mar 17 17:39:34.022955 containerd[1432]: time="2025-03-17T17:39:34.021712253Z" level=info msg="Forcibly stopping sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\"" Mar 17 17:39:34.022955 containerd[1432]: time="2025-03-17T17:39:34.021809012Z" level=info msg="TearDown network for sandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" successfully" Mar 17 17:39:34.024339 containerd[1432]: time="2025-03-17T17:39:34.024302899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.024455 containerd[1432]: time="2025-03-17T17:39:34.024438297Z" level=info msg="RemovePodSandbox \"fd339be2fe20d44223147d2bb81d73e835f8b9f3a64b8f1e524760217a948724\" returns successfully" Mar 17 17:39:34.024859 containerd[1432]: time="2025-03-17T17:39:34.024835852Z" level=info msg="StopPodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\"" Mar 17 17:39:34.025118 containerd[1432]: time="2025-03-17T17:39:34.025097848Z" level=info msg="TearDown network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" successfully" Mar 17 17:39:34.025202 containerd[1432]: time="2025-03-17T17:39:34.025185607Z" level=info msg="StopPodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" returns successfully" Mar 17 17:39:34.025587 containerd[1432]: time="2025-03-17T17:39:34.025566762Z" level=info msg="RemovePodSandbox for \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\"" Mar 17 17:39:34.025710 containerd[1432]: time="2025-03-17T17:39:34.025689080Z" level=info msg="Forcibly stopping sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\"" Mar 17 17:39:34.025857 containerd[1432]: time="2025-03-17T17:39:34.025839438Z" level=info msg="TearDown network for sandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" successfully" Mar 17 17:39:34.028594 containerd[1432]: time="2025-03-17T17:39:34.028559642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.028761 containerd[1432]: time="2025-03-17T17:39:34.028739520Z" level=info msg="RemovePodSandbox \"f82d66f76ee7e61e29a06e84278133837a8bb0dbe75c06df61b5f2d7c3c0f477\" returns successfully" Mar 17 17:39:34.029248 containerd[1432]: time="2025-03-17T17:39:34.029223114Z" level=info msg="StopPodSandbox for \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\"" Mar 17 17:39:34.029457 containerd[1432]: time="2025-03-17T17:39:34.029415991Z" level=info msg="TearDown network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\" successfully" Mar 17 17:39:34.029457 containerd[1432]: time="2025-03-17T17:39:34.029433831Z" level=info msg="StopPodSandbox for \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\" returns successfully" Mar 17 17:39:34.029927 containerd[1432]: time="2025-03-17T17:39:34.029842665Z" level=info msg="RemovePodSandbox for \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\"" Mar 17 17:39:34.029927 containerd[1432]: time="2025-03-17T17:39:34.029874745Z" level=info msg="Forcibly stopping sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\"" Mar 17 17:39:34.030070 containerd[1432]: time="2025-03-17T17:39:34.029954544Z" level=info msg="TearDown network for sandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\" successfully" Mar 17 17:39:34.032359 containerd[1432]: time="2025-03-17T17:39:34.032326792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.032416 containerd[1432]: time="2025-03-17T17:39:34.032388472Z" level=info msg="RemovePodSandbox \"20a33d25f30927eda0c005930291ebaea388da3c760fc0cc3de0c018d42ef761\" returns successfully" Mar 17 17:39:34.032762 containerd[1432]: time="2025-03-17T17:39:34.032738507Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" Mar 17 17:39:34.033217 containerd[1432]: time="2025-03-17T17:39:34.033027103Z" level=info msg="TearDown network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" successfully" Mar 17 17:39:34.033217 containerd[1432]: time="2025-03-17T17:39:34.033045023Z" level=info msg="StopPodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" returns successfully" Mar 17 17:39:34.033375 containerd[1432]: time="2025-03-17T17:39:34.033333539Z" level=info msg="RemovePodSandbox for \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" Mar 17 17:39:34.033375 containerd[1432]: time="2025-03-17T17:39:34.033361259Z" level=info msg="Forcibly stopping sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\"" Mar 17 17:39:34.033522 containerd[1432]: time="2025-03-17T17:39:34.033433218Z" level=info msg="TearDown network for sandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" successfully" Mar 17 17:39:34.035719 containerd[1432]: time="2025-03-17T17:39:34.035684548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.035784 containerd[1432]: time="2025-03-17T17:39:34.035745187Z" level=info msg="RemovePodSandbox \"c2a9af0509455d99dcafe1d8109b930c05deed9e54eb5d1aa1f8f219acef6711\" returns successfully" Mar 17 17:39:34.036628 containerd[1432]: time="2025-03-17T17:39:34.036406818Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" Mar 17 17:39:34.039114 containerd[1432]: time="2025-03-17T17:39:34.038801386Z" level=info msg="TearDown network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" successfully" Mar 17 17:39:34.039114 containerd[1432]: time="2025-03-17T17:39:34.038833906Z" level=info msg="StopPodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" returns successfully" Mar 17 17:39:34.040572 containerd[1432]: time="2025-03-17T17:39:34.039218821Z" level=info msg="RemovePodSandbox for \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" Mar 17 17:39:34.040572 containerd[1432]: time="2025-03-17T17:39:34.039254860Z" level=info msg="Forcibly stopping sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\"" Mar 17 17:39:34.040572 containerd[1432]: time="2025-03-17T17:39:34.039348979Z" level=info msg="TearDown network for sandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" successfully" Mar 17 17:39:34.042900 containerd[1432]: time="2025-03-17T17:39:34.042855013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.042972 containerd[1432]: time="2025-03-17T17:39:34.042922012Z" level=info msg="RemovePodSandbox \"22f128e05bdac14baad82424ae2ee21815d51e389e7f1c6d28d9a69ad09ca96f\" returns successfully" Mar 17 17:39:34.043352 containerd[1432]: time="2025-03-17T17:39:34.043317527Z" level=info msg="StopPodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\"" Mar 17 17:39:34.043443 containerd[1432]: time="2025-03-17T17:39:34.043410605Z" level=info msg="TearDown network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" successfully" Mar 17 17:39:34.043443 containerd[1432]: time="2025-03-17T17:39:34.043425525Z" level=info msg="StopPodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" returns successfully" Mar 17 17:39:34.043841 containerd[1432]: time="2025-03-17T17:39:34.043813200Z" level=info msg="RemovePodSandbox for \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\"" Mar 17 17:39:34.044649 containerd[1432]: time="2025-03-17T17:39:34.043926198Z" level=info msg="Forcibly stopping sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\"" Mar 17 17:39:34.044649 containerd[1432]: time="2025-03-17T17:39:34.043994718Z" level=info msg="TearDown network for sandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" successfully" Mar 17 17:39:34.046689 containerd[1432]: time="2025-03-17T17:39:34.046655042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.046767 containerd[1432]: time="2025-03-17T17:39:34.046717641Z" level=info msg="RemovePodSandbox \"db2539f2624d96ec53349cb192ccd9ad9c5c9c88876b4c7b9e40ed756bb6220f\" returns successfully" Mar 17 17:39:34.047140 containerd[1432]: time="2025-03-17T17:39:34.047111996Z" level=info msg="StopPodSandbox for \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\"" Mar 17 17:39:34.047395 containerd[1432]: time="2025-03-17T17:39:34.047292874Z" level=info msg="TearDown network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\" successfully" Mar 17 17:39:34.047395 containerd[1432]: time="2025-03-17T17:39:34.047310394Z" level=info msg="StopPodSandbox for \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\" returns successfully" Mar 17 17:39:34.047631 containerd[1432]: time="2025-03-17T17:39:34.047596070Z" level=info msg="RemovePodSandbox for \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\"" Mar 17 17:39:34.047733 containerd[1432]: time="2025-03-17T17:39:34.047684829Z" level=info msg="Forcibly stopping sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\"" Mar 17 17:39:34.047811 containerd[1432]: time="2025-03-17T17:39:34.047795267Z" level=info msg="TearDown network for sandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\" successfully" Mar 17 17:39:34.050108 containerd[1432]: time="2025-03-17T17:39:34.050066317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.050179 containerd[1432]: time="2025-03-17T17:39:34.050128036Z" level=info msg="RemovePodSandbox \"0e87d8a12baf1edfed1d3f1d2d660ebcc514fa7c454aae3378ad191a05643ffc\" returns successfully" Mar 17 17:39:34.050521 containerd[1432]: time="2025-03-17T17:39:34.050489991Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" Mar 17 17:39:34.050609 containerd[1432]: time="2025-03-17T17:39:34.050591270Z" level=info msg="TearDown network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" successfully" Mar 17 17:39:34.050609 containerd[1432]: time="2025-03-17T17:39:34.050606790Z" level=info msg="StopPodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" returns successfully" Mar 17 17:39:34.052119 containerd[1432]: time="2025-03-17T17:39:34.050894986Z" level=info msg="RemovePodSandbox for \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" Mar 17 17:39:34.052119 containerd[1432]: time="2025-03-17T17:39:34.050923626Z" level=info msg="Forcibly stopping sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\"" Mar 17 17:39:34.052119 containerd[1432]: time="2025-03-17T17:39:34.050984465Z" level=info msg="TearDown network for sandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" successfully" Mar 17 17:39:34.053399 containerd[1432]: time="2025-03-17T17:39:34.053363953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.053514 containerd[1432]: time="2025-03-17T17:39:34.053497511Z" level=info msg="RemovePodSandbox \"e66de716f912fe3693fb2985c4d2f2ed4f4201fd40c5d2ac4fc2ed1c117823ca\" returns successfully" Mar 17 17:39:34.053925 containerd[1432]: time="2025-03-17T17:39:34.053897426Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" Mar 17 17:39:34.054018 containerd[1432]: time="2025-03-17T17:39:34.054000985Z" level=info msg="TearDown network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" successfully" Mar 17 17:39:34.054018 containerd[1432]: time="2025-03-17T17:39:34.054015025Z" level=info msg="StopPodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" returns successfully" Mar 17 17:39:34.054328 containerd[1432]: time="2025-03-17T17:39:34.054303221Z" level=info msg="RemovePodSandbox for \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" Mar 17 17:39:34.054363 containerd[1432]: time="2025-03-17T17:39:34.054337220Z" level=info msg="Forcibly stopping sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\"" Mar 17 17:39:34.054425 containerd[1432]: time="2025-03-17T17:39:34.054411099Z" level=info msg="TearDown network for sandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" successfully" Mar 17 17:39:34.056761 containerd[1432]: time="2025-03-17T17:39:34.056715989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.056816 containerd[1432]: time="2025-03-17T17:39:34.056781028Z" level=info msg="RemovePodSandbox \"832fff688efea2340e95fc036e226b1efcfbd50af95bbef1dc510d9d0a375e5e\" returns successfully" Mar 17 17:39:34.057229 containerd[1432]: time="2025-03-17T17:39:34.057200542Z" level=info msg="StopPodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\"" Mar 17 17:39:34.057324 containerd[1432]: time="2025-03-17T17:39:34.057308861Z" level=info msg="TearDown network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" successfully" Mar 17 17:39:34.057349 containerd[1432]: time="2025-03-17T17:39:34.057325181Z" level=info msg="StopPodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" returns successfully" Mar 17 17:39:34.057609 containerd[1432]: time="2025-03-17T17:39:34.057586297Z" level=info msg="RemovePodSandbox for \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\"" Mar 17 17:39:34.057662 containerd[1432]: time="2025-03-17T17:39:34.057616537Z" level=info msg="Forcibly stopping sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\"" Mar 17 17:39:34.057724 containerd[1432]: time="2025-03-17T17:39:34.057705896Z" level=info msg="TearDown network for sandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" successfully" Mar 17 17:39:34.060042 containerd[1432]: time="2025-03-17T17:39:34.060008625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.060108 containerd[1432]: time="2025-03-17T17:39:34.060078944Z" level=info msg="RemovePodSandbox \"28a26fc3dfe1bad07607f76d0cdbc436f31c477055a91694b874359afbed6e5c\" returns successfully" Mar 17 17:39:34.060599 containerd[1432]: time="2025-03-17T17:39:34.060574058Z" level=info msg="StopPodSandbox for \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\"" Mar 17 17:39:34.060691 containerd[1432]: time="2025-03-17T17:39:34.060675536Z" level=info msg="TearDown network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\" successfully" Mar 17 17:39:34.060691 containerd[1432]: time="2025-03-17T17:39:34.060690016Z" level=info msg="StopPodSandbox for \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\" returns successfully" Mar 17 17:39:34.061025 containerd[1432]: time="2025-03-17T17:39:34.060998892Z" level=info msg="RemovePodSandbox for \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\"" Mar 17 17:39:34.061309 containerd[1432]: time="2025-03-17T17:39:34.061124730Z" level=info msg="Forcibly stopping sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\"" Mar 17 17:39:34.061309 containerd[1432]: time="2025-03-17T17:39:34.061196289Z" level=info msg="TearDown network for sandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\" successfully" Mar 17 17:39:34.063732 containerd[1432]: time="2025-03-17T17:39:34.063694696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.063934 containerd[1432]: time="2025-03-17T17:39:34.063842174Z" level=info msg="RemovePodSandbox \"df1ca33a4fcf7aed4c61ca5c747db65fbdf925fee3803b44287b013c02b09adc\" returns successfully" Mar 17 17:39:34.064227 containerd[1432]: time="2025-03-17T17:39:34.064201489Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" Mar 17 17:39:34.064319 containerd[1432]: time="2025-03-17T17:39:34.064305128Z" level=info msg="TearDown network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" successfully" Mar 17 17:39:34.064435 containerd[1432]: time="2025-03-17T17:39:34.064318768Z" level=info msg="StopPodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" returns successfully" Mar 17 17:39:34.064601 containerd[1432]: time="2025-03-17T17:39:34.064575525Z" level=info msg="RemovePodSandbox for \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" Mar 17 17:39:34.064645 containerd[1432]: time="2025-03-17T17:39:34.064607644Z" level=info msg="Forcibly stopping sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\"" Mar 17 17:39:34.064710 containerd[1432]: time="2025-03-17T17:39:34.064693563Z" level=info msg="TearDown network for sandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" successfully" Mar 17 17:39:34.066943 containerd[1432]: time="2025-03-17T17:39:34.066901174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.066990 containerd[1432]: time="2025-03-17T17:39:34.066962413Z" level=info msg="RemovePodSandbox \"024db13e9c1a139f8666e189e3c275ebf7d08134a87b10927729d2e804792caf\" returns successfully" Mar 17 17:39:34.067297 containerd[1432]: time="2025-03-17T17:39:34.067271089Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" Mar 17 17:39:34.067380 containerd[1432]: time="2025-03-17T17:39:34.067364888Z" level=info msg="TearDown network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" successfully" Mar 17 17:39:34.067412 containerd[1432]: time="2025-03-17T17:39:34.067380007Z" level=info msg="StopPodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" returns successfully" Mar 17 17:39:34.067673 containerd[1432]: time="2025-03-17T17:39:34.067647684Z" level=info msg="RemovePodSandbox for \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" Mar 17 17:39:34.068401 containerd[1432]: time="2025-03-17T17:39:34.067754002Z" level=info msg="Forcibly stopping sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\"" Mar 17 17:39:34.068401 containerd[1432]: time="2025-03-17T17:39:34.067826081Z" level=info msg="TearDown network for sandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" successfully" Mar 17 17:39:34.070276 containerd[1432]: time="2025-03-17T17:39:34.070235049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.070326 containerd[1432]: time="2025-03-17T17:39:34.070301569Z" level=info msg="RemovePodSandbox \"0fb4c3d5bb813ca72ca92be4cf058d70e155f9fecf379d489caacf36a0627531\" returns successfully" Mar 17 17:39:34.070732 containerd[1432]: time="2025-03-17T17:39:34.070653044Z" level=info msg="StopPodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\"" Mar 17 17:39:34.070958 containerd[1432]: time="2025-03-17T17:39:34.070744923Z" level=info msg="TearDown network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" successfully" Mar 17 17:39:34.070958 containerd[1432]: time="2025-03-17T17:39:34.070754283Z" level=info msg="StopPodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" returns successfully" Mar 17 17:39:34.071692 containerd[1432]: time="2025-03-17T17:39:34.071167597Z" level=info msg="RemovePodSandbox for \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\"" Mar 17 17:39:34.071692 containerd[1432]: time="2025-03-17T17:39:34.071202997Z" level=info msg="Forcibly stopping sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\"" Mar 17 17:39:34.071692 containerd[1432]: time="2025-03-17T17:39:34.071273556Z" level=info msg="TearDown network for sandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" successfully" Mar 17 17:39:34.073681 containerd[1432]: time="2025-03-17T17:39:34.073642044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:34.073681 containerd[1432]: time="2025-03-17T17:39:34.073705643Z" level=info msg="RemovePodSandbox \"67fb93469d63b8d83d7b7d7a0ad149465635b76dd731b362161770d94d3db6c9\" returns successfully" Mar 17 17:39:34.074109 containerd[1432]: time="2025-03-17T17:39:34.074067079Z" level=info msg="StopPodSandbox for \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\"" Mar 17 17:39:34.074185 containerd[1432]: time="2025-03-17T17:39:34.074166717Z" level=info msg="TearDown network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\" successfully" Mar 17 17:39:34.074222 containerd[1432]: time="2025-03-17T17:39:34.074183237Z" level=info msg="StopPodSandbox for \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\" returns successfully" Mar 17 17:39:34.074562 containerd[1432]: time="2025-03-17T17:39:34.074536032Z" level=info msg="RemovePodSandbox for \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\"" Mar 17 17:39:34.075268 containerd[1432]: time="2025-03-17T17:39:34.074648751Z" level=info msg="Forcibly stopping sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\"" Mar 17 17:39:34.075268 containerd[1432]: time="2025-03-17T17:39:34.074717430Z" level=info msg="TearDown network for sandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\" successfully" Mar 17 17:39:34.076943 containerd[1432]: time="2025-03-17T17:39:34.076908921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:39:34.076993 containerd[1432]: time="2025-03-17T17:39:34.076971320Z" level=info msg="RemovePodSandbox \"c3977994b10969da4b1900d4dc674f6cf55404b6f9c5e11236566d76a314f2f5\" returns successfully" Mar 17 17:39:37.233699 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:58148.service - OpenSSH per-connection server daemon (10.0.0.1:58148). Mar 17 17:39:37.275668 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 58148 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:39:37.277158 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:37.281701 systemd-logind[1422]: New session 20 of user core. Mar 17 17:39:37.290898 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:39:37.425902 sshd[5513]: Connection closed by 10.0.0.1 port 58148 Mar 17 17:39:37.426266 sshd-session[5511]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:37.429749 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:58148.service: Deactivated successfully. Mar 17 17:39:37.432788 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:39:37.433978 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:39:37.434935 systemd-logind[1422]: Removed session 20. 
Mar 17 17:39:38.755001 containerd[1432]: time="2025-03-17T17:39:38.754834086Z" level=info msg="StopContainer for \"5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520\" with timeout 300 (s)" Mar 17 17:39:38.757016 containerd[1432]: time="2025-03-17T17:39:38.756763141Z" level=info msg="Stop container \"5e2aefbfdc523c7dfa8a9f16994c1e96dd5484bbc206c2b385ea79592ba87520\" with signal terminated" Mar 17 17:39:38.861402 containerd[1432]: time="2025-03-17T17:39:38.860916157Z" level=info msg="StopContainer for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" with timeout 30 (s)" Mar 17 17:39:38.862035 containerd[1432]: time="2025-03-17T17:39:38.861997661Z" level=info msg="Stop container \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" with signal terminated" Mar 17 17:39:38.879569 systemd[1]: cri-containerd-886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f.scope: Deactivated successfully. Mar 17 17:39:38.908051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f-rootfs.mount: Deactivated successfully. Mar 17 17:39:38.923703 containerd[1432]: time="2025-03-17T17:39:38.916552432Z" level=info msg="shim disconnected" id=886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f namespace=k8s.io Mar 17 17:39:38.923918 containerd[1432]: time="2025-03-17T17:39:38.923706323Z" level=warning msg="cleaning up after shim disconnected" id=886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f namespace=k8s.io Mar 17 17:39:38.923918 containerd[1432]: time="2025-03-17T17:39:38.923723203Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:38.943392 containerd[1432]: time="2025-03-17T17:39:38.943321825Z" level=info msg="StopContainer for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" with timeout 5 (s)" Mar 17 17:39:38.944675 containerd[1432]: time="2025-03-17T17:39:38.944203812Z" level=info msg="Stop container \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" with signal terminated" Mar 17 17:39:38.953983 containerd[1432]: time="2025-03-17T17:39:38.953932984Z" level=info msg="StopContainer for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" returns successfully" Mar 17 17:39:38.954528 containerd[1432]: time="2025-03-17T17:39:38.954496295Z" level=info msg="StopPodSandbox for \"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7\"" Mar 17 17:39:38.954572 containerd[1432]: time="2025-03-17T17:39:38.954536975Z" level=info msg="Container to stop \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:38.958133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7-shm.mount: Deactivated successfully. Mar 17 17:39:38.968743 systemd[1]: cri-containerd-c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7.scope: Deactivated successfully. Mar 17 17:39:38.974757 systemd[1]: cri-containerd-a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e.scope: Deactivated successfully. Mar 17 17:39:38.975213 systemd[1]: cri-containerd-a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e.scope: Consumed 1.666s CPU time. 
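"StopContainer ... with timeout 300 (s)" followed by "Stop container ... with signal terminated" is the CRI graceful-stop contract: deliver SIGTERM, wait up to the timeout, then escalate to SIGKILL; the systemd lines that follow ("cri-containerd-...scope: Deactivated successfully") are the container's scope unit winding down afterwards. A rough sketch of that pattern with the containerd Go client follows (namespace, socket path, and the 30-second wait are illustrative; the CRI plugin's real implementation lives inside containerd):

// Sketch of the graceful-stop pattern the log shows: SIGTERM, wait,
// then SIGKILL on timeout. Container ID is taken from this log.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ctr, err := client.LoadContainer(ctx,
		"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f")
	if err != nil {
		log.Fatal(err)
	}
	task, err := ctr.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "Stop container ... with signal terminated": ask nicely first.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case st := <-exitCh:
		log.Printf("exited with code %d", st.ExitCode())
	case <-time.After(30 * time.Second): // the first StopContainer above allowed 300 s
		// Timeout elapsed: escalate, as the CRI contract requires.
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}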
Mar 17 17:39:39.007013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e-rootfs.mount: Deactivated successfully. Mar 17 17:39:39.008385 containerd[1432]: time="2025-03-17T17:39:39.007747762Z" level=info msg="shim disconnected" id=a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e namespace=k8s.io Mar 17 17:39:39.008385 containerd[1432]: time="2025-03-17T17:39:39.007830479Z" level=warning msg="cleaning up after shim disconnected" id=a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e namespace=k8s.io Mar 17 17:39:39.008385 containerd[1432]: time="2025-03-17T17:39:39.008002113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:39.023031 containerd[1432]: time="2025-03-17T17:39:39.022131412Z" level=info msg="shim disconnected" id=c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7 namespace=k8s.io Mar 17 17:39:39.023031 containerd[1432]: time="2025-03-17T17:39:39.022191690Z" level=warning msg="cleaning up after shim disconnected" id=c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7 namespace=k8s.io Mar 17 17:39:39.023031 containerd[1432]: time="2025-03-17T17:39:39.022200209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:39.058143 containerd[1432]: time="2025-03-17T17:39:39.058079436Z" level=info msg="StopContainer for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" returns successfully" Mar 17 17:39:39.058784 containerd[1432]: time="2025-03-17T17:39:39.058758574Z" level=info msg="StopPodSandbox for \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\"" Mar 17 17:39:39.058853 containerd[1432]: time="2025-03-17T17:39:39.058794613Z" level=info msg="Container to stop \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:39.058853 containerd[1432]: time="2025-03-17T17:39:39.058805493Z" level=info msg="Container to stop \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:39.058853 containerd[1432]: time="2025-03-17T17:39:39.058815692Z" level=info msg="Container to stop \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:39.066311 systemd[1]: cri-containerd-ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af.scope: Deactivated successfully. 
Mar 17 17:39:39.087602 containerd[1432]: time="2025-03-17T17:39:39.087539233Z" level=info msg="shim disconnected" id=ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af namespace=k8s.io Mar 17 17:39:39.087602 containerd[1432]: time="2025-03-17T17:39:39.087597831Z" level=warning msg="cleaning up after shim disconnected" id=ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af namespace=k8s.io Mar 17 17:39:39.087772 containerd[1432]: time="2025-03-17T17:39:39.087615751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:39.117976 containerd[1432]: time="2025-03-17T17:39:39.117914680Z" level=info msg="TearDown network for sandbox \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" successfully" Mar 17 17:39:39.117976 containerd[1432]: time="2025-03-17T17:39:39.117958479Z" level=info msg="StopPodSandbox for \"ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af\" returns successfully" Mar 17 17:39:39.123209 systemd-networkd[1378]: cali212b5e08e92: Link DOWN Mar 17 17:39:39.123218 systemd-networkd[1378]: cali212b5e08e92: Lost carrier Mar 17 17:39:39.163425 kubelet[2512]: I0317 17:39:39.163375 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-lib-calico\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163425 kubelet[2512]: I0317 17:39:39.163422 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-net-dir\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163923 kubelet[2512]: I0317 17:39:39.163447 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08438b59-dd76-48a8-af38-c962e3ad9fc2-tigera-ca-bundle\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163923 kubelet[2512]: I0317 17:39:39.163463 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-bin-dir\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163923 kubelet[2512]: I0317 17:39:39.163478 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-flexvol-driver-host\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163923 kubelet[2512]: I0317 17:39:39.163499 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-log-dir\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163923 kubelet[2512]: I0317 17:39:39.163518 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-xtables-lock\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: 
\"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.163923 kubelet[2512]: I0317 17:39:39.163535 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-lib-modules\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.164062 kubelet[2512]: I0317 17:39:39.163550 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-run-calico\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.164062 kubelet[2512]: I0317 17:39:39.163570 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmktg\" (UniqueName: \"kubernetes.io/projected/08438b59-dd76-48a8-af38-c962e3ad9fc2-kube-api-access-jmktg\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.164062 kubelet[2512]: I0317 17:39:39.163584 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-policysync\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.164062 kubelet[2512]: I0317 17:39:39.163641 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08438b59-dd76-48a8-af38-c962e3ad9fc2-node-certs\") pod \"08438b59-dd76-48a8-af38-c962e3ad9fc2\" (UID: \"08438b59-dd76-48a8-af38-c962e3ad9fc2\") " Mar 17 17:39:39.180586 kubelet[2512]: I0317 17:39:39.179671 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.180586 kubelet[2512]: I0317 17:39:39.179745 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.184777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7-rootfs.mount: Deactivated successfully. Mar 17 17:39:39.184889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af-rootfs.mount: Deactivated successfully. Mar 17 17:39:39.184966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffbd13eafef0b14c7ce7408b9ee7f461a30a01dfdbbbca796fc0179e334775af-shm.mount: Deactivated successfully. Mar 17 17:39:39.185025 systemd[1]: var-lib-kubelet-pods-08438b59\x2ddd76\x2d48a8\x2daf38\x2dc962e3ad9fc2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmktg.mount: Deactivated successfully. 
Mar 17 17:39:39.187596 kubelet[2512]: I0317 17:39:39.187556 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-policysync" (OuterVolumeSpecName: "policysync") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.187596 kubelet[2512]: I0317 17:39:39.187598 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.187779 kubelet[2512]: I0317 17:39:39.187611 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.187779 kubelet[2512]: I0317 17:39:39.187634 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.187779 kubelet[2512]: I0317 17:39:39.187666 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.187779 kubelet[2512]: I0317 17:39:39.187685 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.188587 kubelet[2512]: E0317 17:39:39.188552 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="08438b59-dd76-48a8-af38-c962e3ad9fc2" containerName="calico-node" Mar 17 17:39:39.188587 kubelet[2512]: E0317 17:39:39.188583 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="08438b59-dd76-48a8-af38-c962e3ad9fc2" containerName="flexvol-driver" Mar 17 17:39:39.188587 kubelet[2512]: E0317 17:39:39.188590 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="08438b59-dd76-48a8-af38-c962e3ad9fc2" containerName="install-cni" Mar 17 17:39:39.188712 kubelet[2512]: I0317 17:39:39.188632 2512 memory_manager.go:354] "RemoveStaleState removing state" podUID="08438b59-dd76-48a8-af38-c962e3ad9fc2" containerName="calico-node" Mar 17 17:39:39.190293 systemd[1]: var-lib-kubelet-pods-08438b59\x2ddd76\x2d48a8\x2daf38\x2dc962e3ad9fc2-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Mar 17 17:39:39.191032 kubelet[2512]: I0317 17:39:39.190995 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08438b59-dd76-48a8-af38-c962e3ad9fc2-node-certs" (OuterVolumeSpecName: "node-certs") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:39:39.191146 kubelet[2512]: I0317 17:39:39.191121 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08438b59-dd76-48a8-af38-c962e3ad9fc2-kube-api-access-jmktg" (OuterVolumeSpecName: "kube-api-access-jmktg") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "kube-api-access-jmktg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:39:39.213037 kubelet[2512]: I0317 17:39:39.212973 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08438b59-dd76-48a8-af38-c962e3ad9fc2-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:39:39.214201 kubelet[2512]: I0317 17:39:39.214060 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "08438b59-dd76-48a8-af38-c962e3ad9fc2" (UID: "08438b59-dd76-48a8-af38-c962e3ad9fc2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:39.217180 systemd[1]: var-lib-kubelet-pods-08438b59\x2ddd76\x2d48a8\x2daf38\x2dc962e3ad9fc2-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Mar 17 17:39:39.218689 systemd[1]: Created slice kubepods-besteffort-pod5879db11_b808_43d1_8042_9d8c06b95cd3.slice - libcontainer container kubepods-besteffort-pod5879db11_b808_43d1_8042_9d8c06b95cd3.slice. 
Mar 17 17:39:39.264110 kubelet[2512]: I0317 17:39:39.263879 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-var-run-calico\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264110 kubelet[2512]: I0317 17:39:39.263969 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-var-lib-calico\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264110 kubelet[2512]: I0317 17:39:39.263991 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-policysync\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264110 kubelet[2512]: I0317 17:39:39.264018 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcx9m\" (UniqueName: \"kubernetes.io/projected/5879db11-b808-43d1-8042-9d8c06b95cd3-kube-api-access-jcx9m\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264110 kubelet[2512]: I0317 17:39:39.264036 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-cni-log-dir\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264391 kubelet[2512]: I0317 17:39:39.264053 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-flexvol-driver-host\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264391 kubelet[2512]: I0317 17:39:39.264068 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-xtables-lock\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264391 kubelet[2512]: I0317 17:39:39.264094 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-cni-bin-dir\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264391 kubelet[2512]: I0317 17:39:39.264109 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-cni-net-dir\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264391 kubelet[2512]: I0317 17:39:39.264132 2512 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5879db11-b808-43d1-8042-9d8c06b95cd3-node-certs\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264153 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5879db11-b808-43d1-8042-9d8c06b95cd3-tigera-ca-bundle\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264173 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5879db11-b808-43d1-8042-9d8c06b95cd3-lib-modules\") pod \"calico-node-qgtlx\" (UID: \"5879db11-b808-43d1-8042-9d8c06b95cd3\") " pod="calico-system/calico-node-qgtlx" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264199 2512 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264209 2512 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264217 2512 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-run-calico\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264226 2512 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jmktg\" (UniqueName: \"kubernetes.io/projected/08438b59-dd76-48a8-af38-c962e3ad9fc2-kube-api-access-jmktg\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.264990 kubelet[2512]: I0317 17:39:39.264234 2512 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-policysync\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 17:39:39.264242 2512 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08438b59-dd76-48a8-af38-c962e3ad9fc2-node-certs\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 17:39:39.264251 2512 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 17:39:39.264258 2512 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 17:39:39.264267 2512 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08438b59-dd76-48a8-af38-c962e3ad9fc2-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 
17:39:39.264276 2512 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 17:39:39.264283 2512 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.265408 kubelet[2512]: I0317 17:39:39.264291 2512 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08438b59-dd76-48a8-af38-c962e3ad9fc2-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.119 [INFO][5697] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.120 [INFO][5697] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" iface="eth0" netns="/var/run/netns/cni-9770e4c7-1f5a-2d4c-776d-295e2052b831" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.120 [INFO][5697] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" iface="eth0" netns="/var/run/netns/cni-9770e4c7-1f5a-2d4c-776d-295e2052b831" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.140 [INFO][5697] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" after=20.342255ms iface="eth0" netns="/var/run/netns/cni-9770e4c7-1f5a-2d4c-776d-295e2052b831" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.140 [INFO][5697] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.140 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.194 [INFO][5732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" HandleID="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Workload="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.194 [INFO][5732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.194 [INFO][5732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.260 [INFO][5732] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" HandleID="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Workload="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.263 [INFO][5732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" HandleID="k8s-pod-network.c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Workload="localhost-k8s-calico--kube--controllers--ff5ffdc75--plj5f-eth0" Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.267 [INFO][5732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:39:39.272599 containerd[1432]: 2025-03-17 17:39:39.269 [INFO][5697] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7" Mar 17 17:39:39.274881 systemd[1]: run-netns-cni\x2d9770e4c7\x2d1f5a\x2d2d4c\x2d776d\x2d295e2052b831.mount: Deactivated successfully. Mar 17 17:39:39.275123 containerd[1432]: time="2025-03-17T17:39:39.275075903Z" level=info msg="TearDown network for sandbox \"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7\" successfully" Mar 17 17:39:39.275123 containerd[1432]: time="2025-03-17T17:39:39.275122381Z" level=info msg="StopPodSandbox for \"c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7\" returns successfully" Mar 17 17:39:39.364726 kubelet[2512]: I0317 17:39:39.364678 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-tigera-ca-bundle\") pod \"7dd0b36a-0dc5-4c34-a561-3245bd3255c4\" (UID: \"7dd0b36a-0dc5-4c34-a561-3245bd3255c4\") " Mar 17 17:39:39.364726 kubelet[2512]: I0317 17:39:39.364732 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pct4\" (UniqueName: \"kubernetes.io/projected/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-kube-api-access-4pct4\") pod \"7dd0b36a-0dc5-4c34-a561-3245bd3255c4\" (UID: \"7dd0b36a-0dc5-4c34-a561-3245bd3255c4\") " Mar 17 17:39:39.368028 kubelet[2512]: I0317 17:39:39.367290 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-kube-api-access-4pct4" (OuterVolumeSpecName: "kube-api-access-4pct4") pod "7dd0b36a-0dc5-4c34-a561-3245bd3255c4" (UID: "7dd0b36a-0dc5-4c34-a561-3245bd3255c4"). InnerVolumeSpecName "kube-api-access-4pct4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:39:39.370501 kubelet[2512]: I0317 17:39:39.370434 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "7dd0b36a-0dc5-4c34-a561-3245bd3255c4" (UID: "7dd0b36a-0dc5-4c34-a561-3245bd3255c4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:39:39.372174 systemd[1]: var-lib-kubelet-pods-7dd0b36a\x2d0dc5\x2d4c34\x2da561\x2d3245bd3255c4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. 
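The bracketed cni-plugin/ and ipam/ lines above are Calico's CNI DEL trace: delete the workload's veth inside the container netns, then release the IP allocation under the host-wide IPAM lock, first by handle ID and again by workload ID. From the runtime's side, all of that is a single CNI DEL invocation; a hedged libcni sketch follows (the CNI config directory and network name are assumptions drawn from typical Calico installs; the container ID and netns path are from this log):

// Sketch of the CNI DEL that produces the Calico teardown trace above.
package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	cninet := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
	// Network name assumed; Calico typically installs "k8s-pod-network".
	netconf, err := libcni.LoadConfList("/etc/cni/net.d", "k8s-pod-network")
	if err != nil {
		log.Fatal(err)
	}
	rt := &libcni.RuntimeConf{
		ContainerID: "c38e2988f9bd7bd62852ec7dd9b1e92cfec304f0b2f7415a3627eed5a16493b7",
		NetNS:       "/var/run/netns/cni-9770e4c7-1f5a-2d4c-776d-295e2052b831",
		IfName:      "eth0",
	}
	// DEL must be idempotent: plugins succeed even if the netns is gone,
	// which is why the runtime can then remove the netns mount itself.
	if err := cninet.DelNetworkList(context.Background(), netconf, rt); err != nil {
		log.Fatal(err)
	}
}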
Mar 17 17:39:39.372300 systemd[1]: var-lib-kubelet-pods-7dd0b36a\x2d0dc5\x2d4c34\x2da561\x2d3245bd3255c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4pct4.mount: Deactivated successfully.
Mar 17 17:39:39.402366 kubelet[2512]: I0317 17:39:39.402320 2512 scope.go:117] "RemoveContainer" containerID="a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e"
Mar 17 17:39:39.404714 containerd[1432]: time="2025-03-17T17:39:39.404671906Z" level=info msg="RemoveContainer for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\""
Mar 17 17:39:39.408664 systemd[1]: Removed slice kubepods-besteffort-pod08438b59_dd76_48a8_af38_c962e3ad9fc2.slice - libcontainer container kubepods-besteffort-pod08438b59_dd76_48a8_af38_c962e3ad9fc2.slice.
Mar 17 17:39:39.408771 systemd[1]: kubepods-besteffort-pod08438b59_dd76_48a8_af38_c962e3ad9fc2.slice: Consumed 2.169s CPU time.
Mar 17 17:39:39.412330 containerd[1432]: time="2025-03-17T17:39:39.412266338Z" level=info msg="RemoveContainer for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" returns successfully"
Mar 17 17:39:39.413346 kubelet[2512]: I0317 17:39:39.413215 2512 scope.go:117] "RemoveContainer" containerID="6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c"
Mar 17 17:39:39.416269 containerd[1432]: time="2025-03-17T17:39:39.416016695Z" level=info msg="RemoveContainer for \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\""
Mar 17 17:39:39.417700 systemd[1]: Removed slice kubepods-besteffort-pod7dd0b36a_0dc5_4c34_a561_3245bd3255c4.slice - libcontainer container kubepods-besteffort-pod7dd0b36a_0dc5_4c34_a561_3245bd3255c4.slice.
Mar 17 17:39:39.424673 containerd[1432]: time="2025-03-17T17:39:39.424460779Z" level=info msg="RemoveContainer for \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\" returns successfully"
Mar 17 17:39:39.425194 kubelet[2512]: I0317 17:39:39.425169 2512 scope.go:117] "RemoveContainer" containerID="c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7"
Mar 17 17:39:39.427722 containerd[1432]: time="2025-03-17T17:39:39.427677914Z" level=info msg="RemoveContainer for \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\""
Mar 17 17:39:39.438661 containerd[1432]: time="2025-03-17T17:39:39.438600397Z" level=info msg="RemoveContainer for \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\" returns successfully"
Mar 17 17:39:39.438937 kubelet[2512]: I0317 17:39:39.438899 2512 scope.go:117] "RemoveContainer" containerID="a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e"
Mar 17 17:39:39.439443 containerd[1432]: time="2025-03-17T17:39:39.439256536Z" level=error msg="ContainerStatus for \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\": not found"
Mar 17 17:39:39.439750 kubelet[2512]: E0317 17:39:39.439678 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\": not found" containerID="a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e"
Mar 17 17:39:39.440558 kubelet[2512]: I0317 17:39:39.439716 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e"} err="failed to get container status \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9249aa9bcfc82bb1fd438d0a5bb0706564c7fbaf583939cbe0ae3f88d64f92e\": not found"
Mar 17 17:39:39.440558 kubelet[2512]: I0317 17:39:39.440423 2512 scope.go:117] "RemoveContainer" containerID="6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c"
Mar 17 17:39:39.442553 containerd[1432]: time="2025-03-17T17:39:39.442485030Z" level=error msg="ContainerStatus for \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\": not found"
Mar 17 17:39:39.442831 kubelet[2512]: E0317 17:39:39.442711 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\": not found" containerID="6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c"
Mar 17 17:39:39.442831 kubelet[2512]: I0317 17:39:39.442747 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c"} err="failed to get container status \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6036170d6105fe7ae04db9cacfde35cfca5e95a0ad05a495149440fe994baf3c\": not found"
Mar 17 17:39:39.442831 kubelet[2512]: I0317 17:39:39.442773 2512 scope.go:117] "RemoveContainer" containerID="c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7"
Mar 17 17:39:39.443562 containerd[1432]: time="2025-03-17T17:39:39.442989413Z" level=error msg="ContainerStatus for \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\": not found"
Mar 17 17:39:39.443675 kubelet[2512]: E0317 17:39:39.443212 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\": not found" containerID="c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7"
Mar 17 17:39:39.443675 kubelet[2512]: I0317 17:39:39.443253 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7"} err="failed to get container status \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c82a85b270e8cf5a3d9cf1b2b35269ed357bb01c8d99f87e693a347580d25dd7\": not found"
Mar 17 17:39:39.443675 kubelet[2512]: I0317 17:39:39.443276 2512 scope.go:117] "RemoveContainer" containerID="886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f"
Mar 17 17:39:39.450112 containerd[1432]: time="2025-03-17T17:39:39.448055888Z" level=info msg="RemoveContainer for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\""
Mar 17 17:39:39.459868 containerd[1432]: time="2025-03-17T17:39:39.459570111Z" level=info msg="RemoveContainer for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" returns successfully"
Mar 17 17:39:39.461545 kubelet[2512]: I0317 17:39:39.460777 2512 scope.go:117] "RemoveContainer" containerID="886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f"
Mar 17 17:39:39.461545 kubelet[2512]: E0317 17:39:39.461363 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\": not found" containerID="886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f"
Mar 17 17:39:39.461545 kubelet[2512]: I0317 17:39:39.461393 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f"} err="failed to get container status \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\": rpc error: code = NotFound desc = an error occurred when try to find container \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\": not found"
Mar 17 17:39:39.461772 containerd[1432]: time="2025-03-17T17:39:39.461199898Z" level=error msg="ContainerStatus for \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"886500b2d309a94876479130c9284a0450279cdc068e69310df618fa5f03355f\": not found"
Mar 17 17:39:39.465935 kubelet[2512]: I0317 17:39:39.465892 2512 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4pct4\" (UniqueName: \"kubernetes.io/projected/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-kube-api-access-4pct4\") on node \"localhost\" DevicePath \"\""
Mar 17 17:39:39.465935 kubelet[2512]: I0317 17:39:39.465924 2512 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dd0b36a-0dc5-4c34-a561-3245bd3255c4-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Mar 17 17:39:39.525595 kubelet[2512]: E0317 17:39:39.525337 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"