Mar 19 11:27:40.906983 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 19 11:27:40.907003 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:27:40.907013 kernel: KASLR enabled
Mar 19 11:27:40.907018 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:27:40.907024 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 19 11:27:40.907029 kernel: random: crng init done
Mar 19 11:27:40.907036 kernel: secureboot: Secure boot disabled
Mar 19 11:27:40.907042 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:27:40.907048 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 19 11:27:40.907055 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 19 11:27:40.907061 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907067 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907072 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907078 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907085 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907093 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907099 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907105 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907111 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:27:40.907117 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 19 11:27:40.907123 kernel: NUMA: Failed to initialise from firmware
Mar 19 11:27:40.907129 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:27:40.907135 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 19 11:27:40.907141 kernel: Zone ranges:
Mar 19 11:27:40.907147 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:27:40.907155 kernel: DMA32 empty
Mar 19 11:27:40.907161 kernel: Normal empty
Mar 19 11:27:40.907166 kernel: Movable zone start for each node
Mar 19 11:27:40.907172 kernel: Early memory node ranges
Mar 19 11:27:40.907178 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 19 11:27:40.907184 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 19 11:27:40.907190 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 19 11:27:40.907196 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 19 11:27:40.907202 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 19 11:27:40.907208 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 19 11:27:40.907214 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 19 11:27:40.907220 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 19 11:27:40.907227 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 19 11:27:40.907234 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:27:40.907240 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 19 11:27:40.907248 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:27:40.907255 kernel: psci: PSCIv1.1 detected in firmware.
Mar 19 11:27:40.907261 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:27:40.907268 kernel: psci: Trusted OS migration not required
Mar 19 11:27:40.907275 kernel: psci: SMC Calling Convention v1.1
Mar 19 11:27:40.907281 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 19 11:27:40.907288 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:27:40.907294 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:27:40.907300 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 19 11:27:40.907306 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:27:40.907313 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:27:40.907319 kernel: CPU features: detected: Hardware dirty bit management
Mar 19 11:27:40.907325 kernel: CPU features: detected: Spectre-v4
Mar 19 11:27:40.907333 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:27:40.907339 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 19 11:27:40.907346 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 19 11:27:40.907352 kernel: CPU features: detected: ARM erratum 1418040
Mar 19 11:27:40.907369 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 19 11:27:40.907376 kernel: alternatives: applying boot alternatives
Mar 19 11:27:40.907383 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:27:40.907390 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:27:40.907397 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:27:40.907403 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:27:40.907409 kernel: Fallback order for Node 0: 0
Mar 19 11:27:40.907418 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 19 11:27:40.907424 kernel: Policy zone: DMA
Mar 19 11:27:40.907430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:27:40.907437 kernel: software IO TLB: area num 4.
Mar 19 11:27:40.907443 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 19 11:27:40.907450 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Mar 19 11:27:40.907457 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 19 11:27:40.907463 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:27:40.907470 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:27:40.907477 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 19 11:27:40.907483 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:27:40.907490 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:27:40.907498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:27:40.907505 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 19 11:27:40.907511 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:27:40.907517 kernel: GICv3: 256 SPIs implemented
Mar 19 11:27:40.907523 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:27:40.907530 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:27:40.907536 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 19 11:27:40.907542 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 19 11:27:40.907549 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 19 11:27:40.907555 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 19 11:27:40.907562 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 19 11:27:40.907569 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 19 11:27:40.907576 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 19 11:27:40.907582 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:27:40.907589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:27:40.907595 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 19 11:27:40.907602 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 19 11:27:40.907608 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 19 11:27:40.907615 kernel: arm-pv: using stolen time PV
Mar 19 11:27:40.907621 kernel: Console: colour dummy device 80x25
Mar 19 11:27:40.907628 kernel: ACPI: Core revision 20230628
Mar 19 11:27:40.907635 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 19 11:27:40.907642 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:27:40.907649 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:27:40.907655 kernel: landlock: Up and running.
Mar 19 11:27:40.907662 kernel: SELinux: Initializing.
Mar 19 11:27:40.907668 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:27:40.907675 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:27:40.907682 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 19 11:27:40.907688 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 19 11:27:40.907695 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:27:40.907703 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 11:27:40.907709 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 19 11:27:40.907716 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 19 11:27:40.907722 kernel: Remapping and enabling EFI services.
Mar 19 11:27:40.907729 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:27:40.907736 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:27:40.907742 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 19 11:27:40.907749 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 19 11:27:40.907755 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:27:40.907763 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 19 11:27:40.907770 kernel: Detected PIPT I-cache on CPU2
Mar 19 11:27:40.907781 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 19 11:27:40.907790 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 19 11:27:40.907797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:27:40.907803 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 19 11:27:40.907810 kernel: Detected PIPT I-cache on CPU3
Mar 19 11:27:40.907817 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 19 11:27:40.907824 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 19 11:27:40.907832 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:27:40.907839 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 19 11:27:40.907846 kernel: smp: Brought up 1 node, 4 CPUs
Mar 19 11:27:40.907853 kernel: SMP: Total of 4 processors activated.
Mar 19 11:27:40.907860 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:27:40.907874 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 19 11:27:40.907882 kernel: CPU features: detected: Common not Private translations
Mar 19 11:27:40.907888 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:27:40.907897 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 19 11:27:40.907905 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 19 11:27:40.907912 kernel: CPU features: detected: LSE atomic instructions
Mar 19 11:27:40.907918 kernel: CPU features: detected: Privileged Access Never
Mar 19 11:27:40.907926 kernel: CPU features: detected: RAS Extension Support
Mar 19 11:27:40.907933 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 19 11:27:40.907940 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:27:40.907947 kernel: alternatives: applying system-wide alternatives
Mar 19 11:27:40.907954 kernel: devtmpfs: initialized
Mar 19 11:27:40.907963 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:27:40.907970 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 19 11:27:40.907977 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:27:40.907984 kernel: SMBIOS 3.0.0 present.
Mar 19 11:27:40.907991 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 19 11:27:40.907998 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:27:40.908005 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:27:40.908012 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:27:40.908020 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:27:40.908028 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:27:40.908035 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 19 11:27:40.908042 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:27:40.908049 kernel: cpuidle: using governor menu
Mar 19 11:27:40.908056 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:27:40.908063 kernel: ASID allocator initialised with 32768 entries
Mar 19 11:27:40.908070 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:27:40.908077 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:27:40.908084 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 19 11:27:40.908092 kernel: Modules: 0 pages in range for non-PLT usage
Mar 19 11:27:40.908099 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:27:40.908105 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:27:40.908112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:27:40.908119 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:27:40.908126 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:27:40.908133 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:27:40.908145 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:27:40.908152 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:27:40.908160 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:27:40.908168 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:27:40.908174 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:27:40.908181 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:27:40.908188 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:27:40.908195 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:27:40.908202 kernel: ACPI: Interpreter enabled
Mar 19 11:27:40.908209 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:27:40.908216 kernel: ACPI: MCFG table detected, 1 entries
Mar 19 11:27:40.908223 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 19 11:27:40.908231 kernel: printk: console [ttyAMA0] enabled
Mar 19 11:27:40.908238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 19 11:27:40.908389 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 11:27:40.908466 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 19 11:27:40.908530 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 19 11:27:40.908594 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 19 11:27:40.908656 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 19 11:27:40.908668 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 19 11:27:40.908675 kernel: PCI host bridge to bus 0000:00
Mar 19 11:27:40.908754 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 19 11:27:40.908820 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 19 11:27:40.908887 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 19 11:27:40.908946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 19 11:27:40.909024 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 19 11:27:40.909102 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 19 11:27:40.909168 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 19 11:27:40.909233 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 19 11:27:40.909297 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 19 11:27:40.909371 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 19 11:27:40.909437 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 19 11:27:40.909506 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 19 11:27:40.909564 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 19 11:27:40.909623 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 19 11:27:40.909680 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 19 11:27:40.909689 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 19 11:27:40.909696 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 19 11:27:40.909703 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 19 11:27:40.909710 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 19 11:27:40.909720 kernel: iommu: Default domain type: Translated
Mar 19 11:27:40.909727 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:27:40.909734 kernel: efivars: Registered efivars operations
Mar 19 11:27:40.909741 kernel: vgaarb: loaded
Mar 19 11:27:40.909748 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:27:40.909755 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:27:40.909762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:27:40.909769 kernel: pnp: PnP ACPI init
Mar 19 11:27:40.909846 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 19 11:27:40.909858 kernel: pnp: PnP ACPI: found 1 devices
Mar 19 11:27:40.909873 kernel: NET: Registered PF_INET protocol family
Mar 19 11:27:40.909880 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:27:40.909887 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:27:40.909894 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:27:40.909901 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:27:40.909908 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:27:40.909915 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:27:40.909924 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:27:40.909932 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:27:40.909938 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:27:40.909945 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:27:40.909952 kernel: kvm [1]: HYP mode not available
Mar 19 11:27:40.909959 kernel: Initialise system trusted keyrings
Mar 19 11:27:40.909966 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:27:40.909973 kernel: Key type asymmetric registered
Mar 19 11:27:40.909980 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:27:40.909988 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:27:40.909995 kernel: io scheduler mq-deadline registered
Mar 19 11:27:40.910001 kernel: io scheduler kyber registered
Mar 19 11:27:40.910008 kernel: io scheduler bfq registered
Mar 19 11:27:40.910015 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 19 11:27:40.910022 kernel: ACPI: button: Power Button [PWRB]
Mar 19 11:27:40.910029 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 19 11:27:40.910102 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 19 11:27:40.910112 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:27:40.910121 kernel: thunder_xcv, ver 1.0
Mar 19 11:27:40.910128 kernel: thunder_bgx, ver 1.0
Mar 19 11:27:40.910135 kernel: nicpf, ver 1.0
Mar 19 11:27:40.910142 kernel: nicvf, ver 1.0
Mar 19 11:27:40.910223 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:27:40.910286 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:27:40 UTC (1742383660)
Mar 19 11:27:40.910295 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:27:40.910302 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 19 11:27:40.910311 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:27:40.910318 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:27:40.910325 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:27:40.910332 kernel: Segment Routing with IPv6
Mar 19 11:27:40.910339 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:27:40.910346 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:27:40.910352 kernel: Key type dns_resolver registered
Mar 19 11:27:40.910379 kernel: registered taskstats version 1
Mar 19 11:27:40.910386 kernel: Loading compiled-in X.509 certificates
Mar 19 11:27:40.910393 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:27:40.910402 kernel: Key type .fscrypt registered
Mar 19 11:27:40.910409 kernel: Key type fscrypt-provisioning registered
Mar 19 11:27:40.910416 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:27:40.910423 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:27:40.910430 kernel: ima: No architecture policies found
Mar 19 11:27:40.910437 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:27:40.910444 kernel: clk: Disabling unused clocks
Mar 19 11:27:40.910451 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:27:40.910459 kernel: Run /init as init process
Mar 19 11:27:40.910466 kernel: with arguments:
Mar 19 11:27:40.910472 kernel: /init
Mar 19 11:27:40.910479 kernel: with environment:
Mar 19 11:27:40.910486 kernel: HOME=/
Mar 19 11:27:40.910493 kernel: TERM=linux
Mar 19 11:27:40.910500 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:27:40.910508 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:27:40.910517 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:27:40.910527 systemd[1]: Detected virtualization kvm.
Mar 19 11:27:40.910534 systemd[1]: Detected architecture arm64.
Mar 19 11:27:40.910541 systemd[1]: Running in initrd.
Mar 19 11:27:40.910548 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:27:40.910556 systemd[1]: Hostname set to .
Mar 19 11:27:40.910563 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:27:40.910570 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:27:40.910579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:27:40.910587 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:27:40.910595 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:27:40.910602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:27:40.910610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:27:40.910618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:27:40.910626 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:27:40.910635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:27:40.910643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:27:40.910650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:27:40.910657 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:27:40.910665 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:27:40.910672 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:27:40.910679 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:27:40.910687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:27:40.910694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:27:40.910703 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:27:40.910710 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:27:40.910718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:27:40.910725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:27:40.910732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:27:40.910740 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:27:40.910747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:27:40.910754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:27:40.910763 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:27:40.910770 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:27:40.910778 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:27:40.910785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:27:40.910792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:27:40.910800 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:27:40.910807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:27:40.910816 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:27:40.910841 systemd-journald[238]: Collecting audit messages is disabled.
Mar 19 11:27:40.910861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:27:40.910878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:27:40.910885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:27:40.910893 kernel: Bridge firewalling registered
Mar 19 11:27:40.910900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:27:40.910908 systemd-journald[238]: Journal started
Mar 19 11:27:40.910928 systemd-journald[238]: Runtime Journal (/run/log/journal/6e705010cde645789902d8dd2d5f5e69) is 5.9M, max 47.3M, 41.4M free.
Mar 19 11:27:40.889765 systemd-modules-load[239]: Inserted module 'overlay'
Mar 19 11:27:40.910156 systemd-modules-load[239]: Inserted module 'br_netfilter'
Mar 19 11:27:40.914298 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:27:40.916381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:27:40.919521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:27:40.920903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:27:40.922782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:27:40.924484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:27:40.931861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:27:40.933941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:27:40.935006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:27:40.945538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:27:40.946424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:27:40.948779 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:27:40.962398 dracut-cmdline[281]: dracut-dracut-053
Mar 19 11:27:40.963158 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:27:40.979774 systemd-resolved[274]: Positive Trust Anchors:
Mar 19 11:27:40.979791 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:27:40.979823 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:27:40.984501 systemd-resolved[274]: Defaulting to hostname 'linux'.
Mar 19 11:27:40.985464 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:27:40.988205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:27:41.030380 kernel: SCSI subsystem initialized
Mar 19 11:27:41.034368 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:27:41.041382 kernel: iscsi: registered transport (tcp)
Mar 19 11:27:41.053428 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:27:41.053450 kernel: QLogic iSCSI HBA Driver
Mar 19 11:27:41.092854 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:27:41.103480 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 11:27:41.119376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 11:27:41.119412 kernel: device-mapper: uevent: version 1.0.3
Mar 19 11:27:41.120381 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 11:27:41.166382 kernel: raid6: neonx8 gen() 15798 MB/s
Mar 19 11:27:41.183371 kernel: raid6: neonx4 gen() 15817 MB/s
Mar 19 11:27:41.200371 kernel: raid6: neonx2 gen() 13344 MB/s
Mar 19 11:27:41.217381 kernel: raid6: neonx1 gen() 10500 MB/s
Mar 19 11:27:41.234380 kernel: raid6: int64x8 gen() 6796 MB/s
Mar 19 11:27:41.251382 kernel: raid6: int64x4 gen() 7354 MB/s
Mar 19 11:27:41.268378 kernel: raid6: int64x2 gen() 6115 MB/s
Mar 19 11:27:41.285382 kernel: raid6: int64x1 gen() 5062 MB/s
Mar 19 11:27:41.285404 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
Mar 19 11:27:41.302387 kernel: raid6: .... xor() 12424 MB/s, rmw enabled
Mar 19 11:27:41.302402 kernel: raid6: using neon recovery algorithm
Mar 19 11:27:41.307463 kernel: xor: measuring software checksum speed
Mar 19 11:27:41.307478 kernel: 8regs : 21658 MB/sec
Mar 19 11:27:41.307487 kernel: 32regs : 21693 MB/sec
Mar 19 11:27:41.308385 kernel: arm64_neon : 27898 MB/sec
Mar 19 11:27:41.308411 kernel: xor: using function: arm64_neon (27898 MB/sec)
Mar 19 11:27:41.356384 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 11:27:41.366404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:27:41.377532 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:27:41.390501 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Mar 19 11:27:41.394133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:27:41.397382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:27:41.410449 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Mar 19 11:27:41.434414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:27:41.449549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:27:41.491326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:27:41.500512 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:27:41.511491 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:27:41.512992 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:27:41.514418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:27:41.515565 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:27:41.521518 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:27:41.531300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:27:41.541602 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 19 11:27:41.545428 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 19 11:27:41.545531 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:27:41.545542 kernel: GPT:9289727 != 19775487 Mar 19 11:27:41.545556 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:27:41.545567 kernel: GPT:9289727 != 19775487 Mar 19 11:27:41.545576 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:27:41.545585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:27:41.544806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 19 11:27:41.544929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:27:41.549029 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:27:41.551223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:27:41.551369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:27:41.558261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:27:41.568647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:27:41.571711 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (523) Mar 19 11:27:41.576383 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512) Mar 19 11:27:41.581090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:27:41.588557 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 11:27:41.604109 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 11:27:41.609915 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 19 11:27:41.610892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 11:27:41.618741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:27:41.625478 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:27:41.628126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:27:41.632229 disk-uuid[554]: Primary Header is updated. 
Mar 19 11:27:41.632229 disk-uuid[554]: Secondary Entries is updated. Mar 19 11:27:41.632229 disk-uuid[554]: Secondary Header is updated. Mar 19 11:27:41.635374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:27:41.656536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:27:42.648378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:27:42.648723 disk-uuid[555]: The operation has completed successfully. Mar 19 11:27:42.672980 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:27:42.673069 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:27:42.703483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:27:42.706181 sh[574]: Success Mar 19 11:27:42.719411 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:27:42.745984 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:27:42.763511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:27:42.765650 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:27:42.773979 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:27:42.774024 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:27:42.774044 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:27:42.775408 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:27:42.775424 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:27:42.778986 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:27:42.780009 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 19 11:27:42.787535 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:27:42.790184 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:27:42.797934 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:27:42.797966 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:27:42.797976 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:27:42.800386 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:27:42.808391 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:27:42.813347 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:27:42.822502 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:27:42.875780 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:27:42.893011 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:27:42.894294 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 19 11:27:42.920806 ignition[668]: Ignition 2.20.0 Mar 19 11:27:42.920815 ignition[668]: Stage: fetch-offline Mar 19 11:27:42.920854 ignition[668]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:27:42.920862 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:27:42.921020 ignition[668]: parsed url from cmdline: "" Mar 19 11:27:42.921023 ignition[668]: no config URL provided Mar 19 11:27:42.921027 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:27:42.921035 ignition[668]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:27:42.925498 systemd-networkd[766]: lo: Link UP Mar 19 11:27:42.921056 ignition[668]: op(1): [started] loading QEMU firmware config module Mar 19 11:27:42.925501 systemd-networkd[766]: lo: Gained carrier Mar 19 11:27:42.921060 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 19 11:27:42.926543 systemd-networkd[766]: Enumeration completed Mar 19 11:27:42.928173 ignition[668]: op(1): [finished] loading QEMU firmware config module Mar 19 11:27:42.927063 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:27:42.927067 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:27:42.927623 systemd-networkd[766]: eth0: Link UP Mar 19 11:27:42.927626 systemd-networkd[766]: eth0: Gained carrier Mar 19 11:27:42.927632 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:27:42.927645 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:27:42.930703 systemd[1]: Reached target network.target - Network. 
Mar 19 11:27:42.941392 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:27:42.972248 ignition[668]: parsing config with SHA512: 09207deece87897a263dc50778b48a887fc5ff6a97f321aaa05ac4f592eebd566b8f54fa3136198539468273fa308a41465de7b25acdcc61c764b44938c5d8d4 Mar 19 11:27:42.976448 unknown[668]: fetched base config from "system" Mar 19 11:27:42.976459 unknown[668]: fetched user config from "qemu" Mar 19 11:27:42.976881 ignition[668]: fetch-offline: fetch-offline passed Mar 19 11:27:42.976965 ignition[668]: Ignition finished successfully Mar 19 11:27:42.978655 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:27:42.979962 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 19 11:27:42.990519 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:27:43.001990 ignition[778]: Ignition 2.20.0 Mar 19 11:27:43.002001 ignition[778]: Stage: kargs Mar 19 11:27:43.002152 ignition[778]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:27:43.002162 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:27:43.002994 ignition[778]: kargs: kargs passed Mar 19 11:27:43.004716 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:27:43.003038 ignition[778]: Ignition finished successfully Mar 19 11:27:43.006820 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:27:43.019618 ignition[787]: Ignition 2.20.0 Mar 19 11:27:43.019627 ignition[787]: Stage: disks Mar 19 11:27:43.019774 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:27:43.019782 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:27:43.022760 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 19 11:27:43.020643 ignition[787]: disks: disks passed Mar 19 11:27:43.023599 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:27:43.020681 ignition[787]: Ignition finished successfully Mar 19 11:27:43.024801 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:27:43.026028 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:27:43.027323 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:27:43.028491 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:27:43.034542 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:27:43.047520 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:27:43.051005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:27:43.056453 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:27:43.102373 kernel: EXT4-fs (vda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:27:43.103176 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:27:43.104265 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:27:43.122451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:27:43.123998 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:27:43.124910 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:27:43.124965 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:27:43.124989 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Mar 19 11:27:43.129637 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:27:43.131048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:27:43.152082 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (805) Mar 19 11:27:43.152127 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:27:43.152147 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:27:43.152796 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:27:43.155370 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:27:43.156157 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:27:43.187316 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:27:43.190473 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:27:43.194279 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:27:43.197797 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:27:43.268594 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:27:43.277463 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:27:43.278792 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:27:43.283375 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:27:43.298761 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 19 11:27:43.301230 ignition[918]: INFO : Ignition 2.20.0 Mar 19 11:27:43.301230 ignition[918]: INFO : Stage: mount Mar 19 11:27:43.302402 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:27:43.302402 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:27:43.302402 ignition[918]: INFO : mount: mount passed Mar 19 11:27:43.302402 ignition[918]: INFO : Ignition finished successfully Mar 19 11:27:43.304434 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:27:43.319444 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:27:43.895402 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:27:43.909605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:27:43.914387 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (931) Mar 19 11:27:43.916710 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:27:43.916739 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:27:43.916767 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:27:43.918374 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:27:43.919645 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:27:43.934232 ignition[949]: INFO : Ignition 2.20.0 Mar 19 11:27:43.934232 ignition[949]: INFO : Stage: files Mar 19 11:27:43.935439 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:27:43.935439 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:27:43.935439 ignition[949]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:27:43.937973 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:27:43.937973 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:27:43.939930 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:27:43.939930 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:27:43.939930 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:27:43.938960 unknown[949]: wrote ssh authorized keys file for user: core Mar 19 11:27:43.943402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:27:43.943402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 19 11:27:44.012483 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:27:44.139020 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 
11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:27:44.140685 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 19 11:27:44.585392 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 19 11:27:44.719992 systemd-networkd[766]: eth0: Gained IPv6LL Mar 19 11:27:45.123275 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:27:45.123275 ignition[949]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 19 11:27:45.126051 ignition[949]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 19 11:27:45.141501 ignition[949]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:27:45.144220 ignition[949]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 
11:27:45.145306 ignition[949]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 19 11:27:45.145306 ignition[949]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:27:45.145306 ignition[949]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:27:45.145306 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:27:45.145306 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:27:45.145306 ignition[949]: INFO : files: files passed Mar 19 11:27:45.145306 ignition[949]: INFO : Ignition finished successfully Mar 19 11:27:45.146221 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:27:45.152540 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:27:45.153899 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:27:45.156705 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:27:45.157410 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:27:45.160600 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory Mar 19 11:27:45.163518 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:27:45.163518 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:27:45.166332 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:27:45.166566 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 19 11:27:45.168653 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:27:45.181549 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:27:45.197647 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:27:45.197741 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:27:45.199508 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:27:45.201711 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:27:45.202508 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:27:45.203217 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:27:45.217396 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:27:45.225544 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:27:45.233857 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:27:45.234765 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:27:45.236247 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:27:45.237582 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:27:45.237710 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:27:45.239698 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:27:45.241147 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:27:45.242467 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:27:45.243802 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:27:45.245228 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 19 11:27:45.246654 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:27:45.247969 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:27:45.249340 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:27:45.250763 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:27:45.252008 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:27:45.253077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:27:45.253192 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:27:45.254998 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:27:45.256325 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:27:45.257868 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:27:45.259367 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:27:45.261228 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:27:45.261342 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:27:45.263502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:27:45.263622 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:27:45.265227 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:27:45.266439 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:27:45.271410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:27:45.272340 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:27:45.273893 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:27:45.275021 systemd[1]: iscsid.socket: Deactivated successfully. 
Mar 19 11:27:45.275105 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:27:45.276205 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:27:45.276281 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:27:45.277440 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:27:45.277547 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:27:45.278835 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:27:45.278941 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:27:45.292537 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:27:45.293913 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:27:45.294559 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:27:45.294672 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:27:45.295987 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:27:45.296078 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:27:45.301886 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:27:45.302049 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:27:45.304472 ignition[1003]: INFO : Ignition 2.20.0 Mar 19 11:27:45.304472 ignition[1003]: INFO : Stage: umount Mar 19 11:27:45.306436 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:27:45.306436 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:27:45.306436 ignition[1003]: INFO : umount: umount passed Mar 19 11:27:45.306436 ignition[1003]: INFO : Ignition finished successfully Mar 19 11:27:45.306938 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 19 11:27:45.307055 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:27:45.308090 systemd[1]: Stopped target network.target - Network. Mar 19 11:27:45.309527 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:27:45.309582 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:27:45.310813 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:27:45.310864 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:27:45.312194 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:27:45.312235 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:27:45.313556 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:27:45.313600 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:27:45.315069 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:27:45.316148 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:27:45.318254 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:27:45.330887 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:27:45.331005 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:27:45.334448 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:27:45.334628 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:27:45.334730 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:27:45.336951 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:27:45.337562 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:27:45.337606 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:27:45.353477 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Mar 19 11:27:45.354141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:27:45.354210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:27:45.355628 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:27:45.355668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:27:45.357888 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:27:45.357933 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:27:45.359283 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:27:45.359325 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:27:45.361460 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:27:45.362999 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:27:45.363056 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:27:45.370165 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:27:45.370279 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:27:45.380850 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:27:45.380989 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:27:45.382346 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:27:45.382436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:27:45.385298 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:27:45.385446 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:27:45.387062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Mar 19 11:27:45.387097 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:27:45.388408 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:27:45.388438 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:27:45.389793 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:27:45.389842 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:27:45.391755 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:27:45.391796 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:27:45.393672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:27:45.393709 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:27:45.409501 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:27:45.410259 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:27:45.410309 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:27:45.412590 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 19 11:27:45.412629 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:27:45.414278 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:27:45.414317 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:27:45.415800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:27:45.415846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:27:45.418660 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Mar 19 11:27:45.418710 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:27:45.418969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:27:45.419043 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:27:45.420742 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:27:45.422478 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:27:45.431185 systemd[1]: Switching root. Mar 19 11:27:45.464052 systemd-journald[238]: Journal stopped Mar 19 11:27:46.189329 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Mar 19 11:27:46.189409 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:27:46.189422 kernel: SELinux: policy capability open_perms=1 Mar 19 11:27:46.189435 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:27:46.189445 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:27:46.189454 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:27:46.189463 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:27:46.189472 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:27:46.189481 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:27:46.189490 kernel: audit: type=1403 audit(1742383665.611:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:27:46.189500 systemd[1]: Successfully loaded SELinux policy in 39.022ms. Mar 19 11:27:46.189521 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.200ms. 
Mar 19 11:27:46.189536 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:27:46.189548 systemd[1]: Detected virtualization kvm. Mar 19 11:27:46.189559 systemd[1]: Detected architecture arm64. Mar 19 11:27:46.189569 systemd[1]: Detected first boot. Mar 19 11:27:46.189579 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:27:46.189589 zram_generator::config[1052]: No configuration found. Mar 19 11:27:46.189601 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:27:46.189610 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:27:46.189623 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:27:46.189633 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:27:46.189643 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:27:46.189653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:27:46.189663 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:27:46.189673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:27:46.189683 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:27:46.189693 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 19 11:27:46.189705 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:27:46.189716 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:27:46.189726 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Mar 19 11:27:46.189736 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:27:46.189747 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:27:46.189757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:27:46.189769 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:27:46.189779 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:27:46.189789 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:27:46.189801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:27:46.189811 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 19 11:27:46.189829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:27:46.189840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:27:46.189850 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:27:46.189860 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:27:46.189870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:27:46.189883 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:27:46.189893 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:27:46.189903 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:27:46.189913 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:27:46.189923 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:27:46.189933 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Mar 19 11:27:46.189943 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:27:46.189953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:27:46.189964 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:27:46.189974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:27:46.189986 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:27:46.189997 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:27:46.190007 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:27:46.190017 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:27:46.190027 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:27:46.190037 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:27:46.190047 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 19 11:27:46.190057 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:27:46.190069 systemd[1]: Reached target machines.target - Containers. Mar 19 11:27:46.190079 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:27:46.190089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:27:46.190099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:27:46.190109 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:27:46.190120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 19 11:27:46.190129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:27:46.190144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:27:46.190154 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:27:46.190166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:27:46.190177 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:27:46.190199 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:27:46.190210 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 19 11:27:46.190220 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:27:46.190231 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:27:46.190242 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:27:46.190252 kernel: fuse: init (API version 7.39) Mar 19 11:27:46.190263 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:27:46.190273 kernel: loop: module loaded Mar 19 11:27:46.190282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:27:46.190292 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:27:46.190302 kernel: ACPI: bus type drm_connector registered Mar 19 11:27:46.190312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:27:46.190322 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Mar 19 11:27:46.190333 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:27:46.190343 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:27:46.190354 systemd[1]: Stopped verity-setup.service. Mar 19 11:27:46.190391 systemd-journald[1124]: Collecting audit messages is disabled. Mar 19 11:27:46.190413 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:27:46.190425 systemd-journald[1124]: Journal started Mar 19 11:27:46.190447 systemd-journald[1124]: Runtime Journal (/run/log/journal/6e705010cde645789902d8dd2d5f5e69) is 5.9M, max 47.3M, 41.4M free. Mar 19 11:27:46.014373 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:27:46.025198 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 19 11:27:46.025574 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:27:46.192386 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:27:46.192809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:27:46.193827 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:27:46.194701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:27:46.195668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:27:46.196739 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:27:46.198445 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:27:46.201412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:27:46.202634 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:27:46.202796 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:27:46.203963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 19 11:27:46.204121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:27:46.206728 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:27:46.206895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:27:46.207957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:27:46.208112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:27:46.209450 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:27:46.209615 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:27:46.210870 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:27:46.211042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:27:46.212186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:27:46.213388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:27:46.214589 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:27:46.215885 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:27:46.227682 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:27:46.237445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:27:46.239226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:27:46.240130 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:27:46.240168 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:27:46.241873 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Mar 19 11:27:46.243804 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:27:46.248341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:27:46.249326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:27:46.250588 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:27:46.252277 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:27:46.253188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:27:46.254563 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:27:46.255332 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:27:46.261530 systemd-journald[1124]: Time spent on flushing to /var/log/journal/6e705010cde645789902d8dd2d5f5e69 is 25.736ms for 868 entries. Mar 19 11:27:46.261530 systemd-journald[1124]: System Journal (/var/log/journal/6e705010cde645789902d8dd2d5f5e69) is 8M, max 195.6M, 187.6M free. Mar 19 11:27:46.294396 systemd-journald[1124]: Received client request to flush runtime journal. Mar 19 11:27:46.294435 kernel: loop0: detected capacity change from 0 to 113512 Mar 19 11:27:46.264582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:27:46.268466 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:27:46.273553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:27:46.276982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 19 11:27:46.280286 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:27:46.282525 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:27:46.284056 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:27:46.285881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:27:46.288290 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:27:46.297390 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:27:46.297591 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:27:46.302557 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:27:46.303868 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:27:46.306141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:27:46.313165 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Mar 19 11:27:46.313183 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Mar 19 11:27:46.316559 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:27:46.319811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:27:46.322076 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 19 11:27:46.326388 kernel: loop1: detected capacity change from 0 to 189592 Mar 19 11:27:46.328657 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:27:46.347900 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Mar 19 11:27:46.356425 kernel: loop2: detected capacity change from 0 to 123192 Mar 19 11:27:46.358530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:27:46.371316 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 19 11:27:46.371334 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 19 11:27:46.376398 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:27:46.391409 kernel: loop3: detected capacity change from 0 to 113512 Mar 19 11:27:46.397376 kernel: loop4: detected capacity change from 0 to 189592 Mar 19 11:27:46.403391 kernel: loop5: detected capacity change from 0 to 123192 Mar 19 11:27:46.407021 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 19 11:27:46.407770 (sd-merge)[1196]: Merged extensions into '/usr'. Mar 19 11:27:46.411989 systemd[1]: Reload requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:27:46.412227 systemd[1]: Reloading... Mar 19 11:27:46.484399 zram_generator::config[1221]: No configuration found. Mar 19 11:27:46.523097 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:27:46.578710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:27:46.628124 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:27:46.628350 systemd[1]: Reloading finished in 215 ms. Mar 19 11:27:46.646960 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:27:46.648142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:27:46.660528 systemd[1]: Starting ensure-sysext.service... 
Mar 19 11:27:46.662108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:27:46.677520 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:27:46.677537 systemd[1]: Reloading... Mar 19 11:27:46.679265 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:27:46.679758 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:27:46.680583 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:27:46.680892 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 19 11:27:46.681029 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 19 11:27:46.683723 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:27:46.683847 systemd-tmpfiles[1259]: Skipping /boot Mar 19 11:27:46.692531 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:27:46.692614 systemd-tmpfiles[1259]: Skipping /boot Mar 19 11:27:46.722382 zram_generator::config[1285]: No configuration found. Mar 19 11:27:46.811302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:27:46.860519 systemd[1]: Reloading finished in 182 ms. Mar 19 11:27:46.872519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:27:46.886648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:27:46.893634 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Mar 19 11:27:46.895951 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:27:46.898082 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:27:46.902736 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:27:46.909502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:27:46.913237 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:27:46.918609 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:27:46.922420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:27:46.923725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:27:46.925752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:27:46.932635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:27:46.933569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:27:46.933706 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:27:46.934947 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:27:46.937706 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 19 11:27:46.945117 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Mar 19 11:27:46.946513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:27:46.946745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 19 11:27:46.949853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:27:46.950039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:27:46.951435 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:27:46.951585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:27:46.954756 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:27:46.962668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:27:46.972623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:27:46.975564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:27:46.979638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:27:46.980459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:27:46.980573 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:27:46.981300 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:27:46.986476 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:27:46.987843 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:27:46.989186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:27:46.989339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:27:46.990569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 19 11:27:46.990714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:27:46.992047 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:27:46.992198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:27:46.993378 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:27:47.004965 systemd[1]: Finished ensure-sysext.service. Mar 19 11:27:47.008078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:27:47.010412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:27:47.018370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:27:47.021797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:27:47.028287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:27:47.031673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:27:47.031727 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:27:47.037006 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:27:47.039072 augenrules[1394]: No rules Mar 19 11:27:47.046581 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 19 11:27:47.048543 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 19 11:27:47.060380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1360) Mar 19 11:27:47.075584 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:27:47.076566 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:27:47.078187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:27:47.078649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:27:47.080220 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:27:47.080489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:27:47.081894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:27:47.082288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:27:47.083723 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:27:47.083874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:27:47.093146 systemd-resolved[1327]: Positive Trust Anchors: Mar 19 11:27:47.093161 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:27:47.093191 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:27:47.094371 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 19 11:27:47.105108 systemd-resolved[1327]: Defaulting to hostname 'linux'. 
Mar 19 11:27:47.107334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:27:47.107471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:27:47.112679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:27:47.115986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:27:47.117276 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:27:47.125532 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:27:47.135846 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:27:47.167887 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 19 11:27:47.168937 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:27:47.179868 systemd-networkd[1401]: lo: Link UP Mar 19 11:27:47.180013 systemd-networkd[1401]: lo: Gained carrier Mar 19 11:27:47.182707 systemd-networkd[1401]: Enumeration completed Mar 19 11:27:47.187686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:27:47.188631 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:27:47.189756 systemd[1]: Reached target network.target - Network. Mar 19 11:27:47.191673 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:27:47.194905 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:27:47.195535 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 19 11:27:47.195539 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:27:47.196152 systemd-networkd[1401]: eth0: Link UP Mar 19 11:27:47.196155 systemd-networkd[1401]: eth0: Gained carrier Mar 19 11:27:47.196168 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:27:47.205763 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:27:47.215404 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:27:47.216531 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Mar 19 11:27:47.217248 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 19 11:27:47.217291 systemd-timesyncd[1403]: Initial clock synchronization to Wed 2025-03-19 11:27:46.971861 UTC. Mar 19 11:27:47.218523 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:27:47.219938 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:27:47.228877 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:27:47.240210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:27:47.272993 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:27:47.274183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:27:47.276477 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:27:47.277299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 19 11:27:47.278338 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:27:47.279424 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:27:47.280284 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:27:47.281393 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:27:47.282261 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:27:47.282292 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:27:47.282975 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:27:47.284571 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:27:47.286744 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:27:47.289759 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:27:47.290877 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:27:47.291830 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:27:47.294748 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:27:47.295981 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:27:47.298020 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:27:47.299636 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:27:47.300557 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:27:47.301281 systemd[1]: Reached target basic.target - Basic System. 
Mar 19 11:27:47.302003 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:27:47.302033 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:27:47.302973 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:27:47.304700 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:27:47.307505 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:27:47.307739 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:27:47.313327 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:27:47.314778 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:27:47.318556 jq[1439]: false Mar 19 11:27:47.318729 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:27:47.321089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:27:47.325697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 19 11:27:47.327398 extend-filesystems[1440]: Found loop3 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found loop4 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found loop5 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda1 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda2 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda3 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found usr Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda4 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda6 Mar 19 11:27:47.327398 extend-filesystems[1440]: Found vda7 Mar 19 11:27:47.348208 extend-filesystems[1440]: Found vda9 Mar 19 11:27:47.348208 extend-filesystems[1440]: Checking size of /dev/vda9 Mar 19 11:27:47.337034 dbus-daemon[1438]: [system] SELinux support is enabled Mar 19 11:27:47.328411 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:27:47.352462 extend-filesystems[1440]: Resized partition /dev/vda9 Mar 19 11:27:47.336935 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:27:47.338669 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:27:47.339185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:27:47.341119 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:27:47.343307 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:27:47.356240 jq[1457]: true Mar 19 11:27:47.344755 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:27:47.350285 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 19 11:27:47.354729 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:27:47.354958 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:27:47.355216 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:27:47.355398 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:27:47.357872 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:27:47.358051 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 11:27:47.368398 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:27:47.373970 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:27:47.375221 tar[1463]: linux-arm64/helm Mar 19 11:27:47.374005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:27:47.381847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1381) Mar 19 11:27:47.378957 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:27:47.378984 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 19 11:27:47.383386 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 19 11:27:47.394676 update_engine[1455]: I20250319 11:27:47.394495 1455 main.cc:92] Flatcar Update Engine starting Mar 19 11:27:47.400400 jq[1464]: true Mar 19 11:27:47.399957 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:27:47.402560 update_engine[1455]: I20250319 11:27:47.402314 1455 update_check_scheduler.cc:74] Next update check in 8m9s Mar 19 11:27:47.404375 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:27:47.408226 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:27:47.416398 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 19 11:27:47.435345 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (Power Button) Mar 19 11:27:47.435716 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 19 11:27:47.435716 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 19 11:27:47.435716 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 19 11:27:47.442634 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Mar 19 11:27:47.435882 systemd-logind[1451]: New seat seat0. Mar 19 11:27:47.438042 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:27:47.442763 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:27:47.444895 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:27:47.478158 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:27:47.484426 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:27:47.488745 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 19 11:27:47.514782 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:27:47.630095 containerd[1468]: time="2025-03-19T11:27:47.630014360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:27:47.661590 containerd[1468]: time="2025-03-19T11:27:47.661528400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663171440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663340920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663394920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663550320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663574400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663635520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:27:47.663887 containerd[1468]: time="2025-03-19T11:27:47.663652840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 19 11:27:47.664297 containerd[1468]: time="2025-03-19T11:27:47.664203280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:27:47.664479 containerd[1468]: time="2025-03-19T11:27:47.664416400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:27:47.664616 containerd[1468]: time="2025-03-19T11:27:47.664595600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:27:47.664743 containerd[1468]: time="2025-03-19T11:27:47.664725320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:27:47.665068 containerd[1468]: time="2025-03-19T11:27:47.665046840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:27:47.665563 containerd[1468]: time="2025-03-19T11:27:47.665531840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:27:47.666325 containerd[1468]: time="2025-03-19T11:27:47.666019960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:27:47.666325 containerd[1468]: time="2025-03-19T11:27:47.666049640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 19 11:27:47.666325 containerd[1468]: time="2025-03-19T11:27:47.666141080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:27:47.666325 containerd[1468]: time="2025-03-19T11:27:47.666188200Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:27:47.671312 containerd[1468]: time="2025-03-19T11:27:47.671286920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671425760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671447800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671464800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671478880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671620360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671847160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671945240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671962600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671982520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.671995640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.672007680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.672019720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.672034240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672394 containerd[1468]: time="2025-03-19T11:27:47.672050800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672063960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672077920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672089160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672108920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672122520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672133640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672145480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672159840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672172200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672183520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672195560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672208280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672221600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672670 containerd[1468]: time="2025-03-19T11:27:47.672232920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672901 containerd[1468]: time="2025-03-19T11:27:47.672244560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 19 11:27:47.672901 containerd[1468]: time="2025-03-19T11:27:47.672258400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672901 containerd[1468]: time="2025-03-19T11:27:47.672273120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:27:47.672901 containerd[1468]: time="2025-03-19T11:27:47.672293600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672901 containerd[1468]: time="2025-03-19T11:27:47.672306320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.672901 containerd[1468]: time="2025-03-19T11:27:47.672318240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:27:47.673105 containerd[1468]: time="2025-03-19T11:27:47.673061560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:27:47.673216 containerd[1468]: time="2025-03-19T11:27:47.673100680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:27:47.673216 containerd[1468]: time="2025-03-19T11:27:47.673178680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:27:47.673216 containerd[1468]: time="2025-03-19T11:27:47.673193560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:27:47.673216 containerd[1468]: time="2025-03-19T11:27:47.673203360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 19 11:27:47.673216 containerd[1468]: time="2025-03-19T11:27:47.673217560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:27:47.673325 containerd[1468]: time="2025-03-19T11:27:47.673228080Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:27:47.673325 containerd[1468]: time="2025-03-19T11:27:47.673238600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 19 11:27:47.673582 containerd[1468]: time="2025-03-19T11:27:47.673532480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:27:47.673582 containerd[1468]: time="2025-03-19T11:27:47.673584440Z" level=info msg="Connect containerd service" Mar 19 11:27:47.673721 containerd[1468]: time="2025-03-19T11:27:47.673618480Z" level=info msg="using legacy CRI server" Mar 19 11:27:47.673721 containerd[1468]: time="2025-03-19T11:27:47.673625800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:27:47.673877 containerd[1468]: time="2025-03-19T11:27:47.673857040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:27:47.674535 containerd[1468]: time="2025-03-19T11:27:47.674506680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 19 11:27:47.674727 containerd[1468]: time="2025-03-19T11:27:47.674701640Z" level=info msg="Start subscribing containerd event" Mar 19 11:27:47.674988 containerd[1468]: time="2025-03-19T11:27:47.674743440Z" level=info msg="Start recovering state" Mar 19 11:27:47.674988 containerd[1468]: time="2025-03-19T11:27:47.674825640Z" level=info msg="Start event monitor" Mar 19 11:27:47.674988 containerd[1468]: time="2025-03-19T11:27:47.674844160Z" level=info msg="Start snapshots syncer" Mar 19 11:27:47.674988 containerd[1468]: time="2025-03-19T11:27:47.674853360Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:27:47.674988 containerd[1468]: time="2025-03-19T11:27:47.674860240Z" level=info msg="Start streaming server" Mar 19 11:27:47.675471 containerd[1468]: time="2025-03-19T11:27:47.675447320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:27:47.675517 containerd[1468]: time="2025-03-19T11:27:47.675495200Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:27:47.678079 containerd[1468]: time="2025-03-19T11:27:47.678056440Z" level=info msg="containerd successfully booted in 0.051266s" Mar 19 11:27:47.678127 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:27:47.757313 tar[1463]: linux-arm64/LICENSE Mar 19 11:27:47.757313 tar[1463]: linux-arm64/README.md Mar 19 11:27:47.779448 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 19 11:27:48.495457 systemd-networkd[1401]: eth0: Gained IPv6LL Mar 19 11:27:48.497611 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:27:48.502082 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:27:48.517637 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 19 11:27:48.520348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 19 11:27:48.522436 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:27:48.536065 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 19 11:27:48.536237 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 19 11:27:48.539154 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:27:48.543947 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:27:48.717707 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:27:48.737503 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:27:48.749591 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:27:48.754186 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:27:48.754385 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:27:48.759197 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:27:48.771104 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:27:48.780775 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:27:48.782737 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 19 11:27:48.784025 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:27:48.989986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:27:48.991485 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:27:48.993116 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:27:48.993266 systemd[1]: Startup finished in 514ms (kernel) + 4.909s (initrd) + 3.424s (userspace) = 8.848s. 
Mar 19 11:27:49.396437 kubelet[1552]: E0319 11:27:49.396385 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:27:49.398231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:27:49.398390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:27:49.398706 systemd[1]: kubelet.service: Consumed 772ms CPU time, 233.4M memory peak. Mar 19 11:27:53.548599 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:27:53.549677 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:53526.service - OpenSSH per-connection server daemon (10.0.0.1:53526). Mar 19 11:27:53.609426 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 53526 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:53.610988 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:53.622206 systemd-logind[1451]: New session 1 of user core. Mar 19 11:27:53.623116 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:27:53.630614 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:27:53.638707 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:27:53.642569 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:27:53.646245 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:27:53.648172 systemd-logind[1451]: New session c1 of user core. Mar 19 11:27:53.738852 systemd[1569]: Queued start job for default target default.target. 
Mar 19 11:27:53.747259 systemd[1569]: Created slice app.slice - User Application Slice. Mar 19 11:27:53.747290 systemd[1569]: Reached target paths.target - Paths. Mar 19 11:27:53.747329 systemd[1569]: Reached target timers.target - Timers. Mar 19 11:27:53.748571 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:27:53.756951 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:27:53.757013 systemd[1569]: Reached target sockets.target - Sockets. Mar 19 11:27:53.757050 systemd[1569]: Reached target basic.target - Basic System. Mar 19 11:27:53.757082 systemd[1569]: Reached target default.target - Main User Target. Mar 19 11:27:53.757107 systemd[1569]: Startup finished in 104ms. Mar 19 11:27:53.757239 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:27:53.758642 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:27:53.817493 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:53530.service - OpenSSH per-connection server daemon (10.0.0.1:53530). Mar 19 11:27:53.859485 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:53.860633 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:53.864434 systemd-logind[1451]: New session 2 of user core. Mar 19 11:27:53.871513 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:27:53.921204 sshd[1582]: Connection closed by 10.0.0.1 port 53530 Mar 19 11:27:53.921083 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Mar 19 11:27:53.933445 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:53530.service: Deactivated successfully. Mar 19 11:27:53.934952 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:27:53.936410 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. 
Mar 19 11:27:53.937542 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:53538.service - OpenSSH per-connection server daemon (10.0.0.1:53538). Mar 19 11:27:53.938708 systemd-logind[1451]: Removed session 2. Mar 19 11:27:53.979315 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 53538 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:53.980617 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:53.985280 systemd-logind[1451]: New session 3 of user core. Mar 19 11:27:53.995503 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:27:54.042963 sshd[1590]: Connection closed by 10.0.0.1 port 53538 Mar 19 11:27:54.043351 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Mar 19 11:27:54.055320 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:53538.service: Deactivated successfully. Mar 19 11:27:54.056748 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:27:54.057382 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:27:54.066750 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:53554.service - OpenSSH per-connection server daemon (10.0.0.1:53554). Mar 19 11:27:54.069671 systemd-logind[1451]: Removed session 3. Mar 19 11:27:54.104931 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 53554 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:54.105938 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:54.109863 systemd-logind[1451]: New session 4 of user core. Mar 19 11:27:54.117485 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:27:54.166854 sshd[1598]: Connection closed by 10.0.0.1 port 53554 Mar 19 11:27:54.167099 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Mar 19 11:27:54.183118 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:53554.service: Deactivated successfully. 
Mar 19 11:27:54.186609 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:27:54.187781 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:27:54.188845 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:53562.service - OpenSSH per-connection server daemon (10.0.0.1:53562). Mar 19 11:27:54.189559 systemd-logind[1451]: Removed session 4. Mar 19 11:27:54.229590 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 53562 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:54.230693 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:54.234691 systemd-logind[1451]: New session 5 of user core. Mar 19 11:27:54.247480 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:27:54.302304 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:27:54.302589 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:27:54.316152 sudo[1607]: pam_unix(sudo:session): session closed for user root Mar 19 11:27:54.317398 sshd[1606]: Connection closed by 10.0.0.1 port 53562 Mar 19 11:27:54.317898 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Mar 19 11:27:54.331331 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:53562.service: Deactivated successfully. Mar 19 11:27:54.332597 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:27:54.333199 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:27:54.343622 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:53572.service - OpenSSH per-connection server daemon (10.0.0.1:53572). Mar 19 11:27:54.344497 systemd-logind[1451]: Removed session 5. 
Mar 19 11:27:54.381349 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 53572 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:54.382492 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:54.386632 systemd-logind[1451]: New session 6 of user core. Mar 19 11:27:54.394498 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:27:54.444303 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:27:54.444607 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:27:54.447572 sudo[1617]: pam_unix(sudo:session): session closed for user root Mar 19 11:27:54.452064 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:27:54.452622 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:27:54.475689 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:27:54.498197 augenrules[1639]: No rules Mar 19 11:27:54.499465 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:27:54.500457 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:27:54.501321 sudo[1616]: pam_unix(sudo:session): session closed for user root Mar 19 11:27:54.503697 sshd[1615]: Connection closed by 10.0.0.1 port 53572 Mar 19 11:27:54.503552 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Mar 19 11:27:54.508330 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:53572.service: Deactivated successfully. Mar 19 11:27:54.509750 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:27:54.510366 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. 
Mar 19 11:27:54.512068 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:53588.service - OpenSSH per-connection server daemon (10.0.0.1:53588). Mar 19 11:27:54.512679 systemd-logind[1451]: Removed session 6. Mar 19 11:27:54.553839 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:27:54.555038 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:27:54.559513 systemd-logind[1451]: New session 7 of user core. Mar 19 11:27:54.569510 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:27:54.619870 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:27:54.621088 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:27:54.958715 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:27:54.959673 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:27:55.206557 dockerd[1671]: time="2025-03-19T11:27:55.206492918Z" level=info msg="Starting up" Mar 19 11:27:55.359384 dockerd[1671]: time="2025-03-19T11:27:55.359338177Z" level=info msg="Loading containers: start." Mar 19 11:27:55.492380 kernel: Initializing XFRM netlink socket Mar 19 11:27:55.564833 systemd-networkd[1401]: docker0: Link UP Mar 19 11:27:55.592729 dockerd[1671]: time="2025-03-19T11:27:55.592663471Z" level=info msg="Loading containers: done." 
Mar 19 11:27:55.611336 dockerd[1671]: time="2025-03-19T11:27:55.610990879Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:27:55.611336 dockerd[1671]: time="2025-03-19T11:27:55.611082206Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:27:55.611336 dockerd[1671]: time="2025-03-19T11:27:55.611275411Z" level=info msg="Daemon has completed initialization" Mar 19 11:27:55.638382 dockerd[1671]: time="2025-03-19T11:27:55.638268632Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:27:55.638467 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:27:56.670682 containerd[1468]: time="2025-03-19T11:27:56.670633109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 19 11:27:57.362295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732504309.mount: Deactivated successfully. 
Mar 19 11:27:58.685217 containerd[1468]: time="2025-03-19T11:27:58.685156438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:27:58.686027 containerd[1468]: time="2025-03-19T11:27:58.685887294Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768" Mar 19 11:27:58.689027 containerd[1468]: time="2025-03-19T11:27:58.688995059Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:27:58.692036 containerd[1468]: time="2025-03-19T11:27:58.692004503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:27:58.693149 containerd[1468]: time="2025-03-19T11:27:58.693119951Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.02244653s" Mar 19 11:27:58.693189 containerd[1468]: time="2025-03-19T11:27:58.693151693Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 19 11:27:58.694026 containerd[1468]: time="2025-03-19T11:27:58.693987456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 19 11:27:59.584705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 19 11:27:59.593597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:27:59.688043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:27:59.691150 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:27:59.732164 kubelet[1926]: E0319 11:27:59.732106 1926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:27:59.734508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:27:59.734642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:27:59.734893 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.9M memory peak. 
Mar 19 11:28:00.168727 containerd[1468]: time="2025-03-19T11:28:00.168568731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:00.169783 containerd[1468]: time="2025-03-19T11:28:00.169147932Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980" Mar 19 11:28:00.169783 containerd[1468]: time="2025-03-19T11:28:00.169724150Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:00.173233 containerd[1468]: time="2025-03-19T11:28:00.173186472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:00.173963 containerd[1468]: time="2025-03-19T11:28:00.173934344Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.47990192s" Mar 19 11:28:00.173963 containerd[1468]: time="2025-03-19T11:28:00.173962171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 19 11:28:00.174598 containerd[1468]: time="2025-03-19T11:28:00.174384228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 19 11:28:01.557084 containerd[1468]: time="2025-03-19T11:28:01.557037425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:01.558091 containerd[1468]: time="2025-03-19T11:28:01.557848777Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831" Mar 19 11:28:01.558862 containerd[1468]: time="2025-03-19T11:28:01.558817911Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:01.562032 containerd[1468]: time="2025-03-19T11:28:01.561984106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:01.563348 containerd[1468]: time="2025-03-19T11:28:01.563126817Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.388716782s" Mar 19 11:28:01.563348 containerd[1468]: time="2025-03-19T11:28:01.563162264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 19 11:28:01.563928 containerd[1468]: time="2025-03-19T11:28:01.563759260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 19 11:28:02.618748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367333449.mount: Deactivated successfully.
Mar 19 11:28:02.838190 containerd[1468]: time="2025-03-19T11:28:02.838140505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:02.838954 containerd[1468]: time="2025-03-19T11:28:02.838911641Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 19 11:28:02.839642 containerd[1468]: time="2025-03-19T11:28:02.839596985Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:02.841874 containerd[1468]: time="2025-03-19T11:28:02.841819306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:02.842485 containerd[1468]: time="2025-03-19T11:28:02.842393418Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.278592169s" Mar 19 11:28:02.842485 containerd[1468]: time="2025-03-19T11:28:02.842433667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 19 11:28:02.843006 containerd[1468]: time="2025-03-19T11:28:02.842982698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 19 11:28:03.405687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912127978.mount: Deactivated successfully. 
Mar 19 11:28:04.096679 containerd[1468]: time="2025-03-19T11:28:04.096523940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:04.097674 containerd[1468]: time="2025-03-19T11:28:04.097632033Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 19 11:28:04.098287 containerd[1468]: time="2025-03-19T11:28:04.098254890Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:04.101729 containerd[1468]: time="2025-03-19T11:28:04.101692160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:04.102876 containerd[1468]: time="2025-03-19T11:28:04.102841862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.259827346s" Mar 19 11:28:04.102954 containerd[1468]: time="2025-03-19T11:28:04.102939667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 19 11:28:04.103440 containerd[1468]: time="2025-03-19T11:28:04.103408842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 19 11:28:04.568749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050335126.mount: Deactivated successfully. 
Mar 19 11:28:04.572543 containerd[1468]: time="2025-03-19T11:28:04.572206760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:04.573222 containerd[1468]: time="2025-03-19T11:28:04.573170020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 19 11:28:04.574015 containerd[1468]: time="2025-03-19T11:28:04.573971388Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:04.576530 containerd[1468]: time="2025-03-19T11:28:04.576481547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:04.578050 containerd[1468]: time="2025-03-19T11:28:04.578013938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 474.576679ms" Mar 19 11:28:04.578050 containerd[1468]: time="2025-03-19T11:28:04.578046500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 19 11:28:04.578593 containerd[1468]: time="2025-03-19T11:28:04.578575378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 19 11:28:05.109679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342528921.mount: Deactivated successfully. 
Mar 19 11:28:07.221464 containerd[1468]: time="2025-03-19T11:28:07.221416062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:07.221993 containerd[1468]: time="2025-03-19T11:28:07.221948089Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Mar 19 11:28:07.222852 containerd[1468]: time="2025-03-19T11:28:07.222801414Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:07.226325 containerd[1468]: time="2025-03-19T11:28:07.226288894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:07.228035 containerd[1468]: time="2025-03-19T11:28:07.227994029Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.649395527s" Mar 19 11:28:07.228035 containerd[1468]: time="2025-03-19T11:28:07.228030500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 19 11:28:09.831093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:28:09.844545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:28:09.973959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:28:09.977105 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:28:10.008723 kubelet[2082]: E0319 11:28:10.008670 2082 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:28:10.011187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:28:10.011325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:28:10.011729 systemd[1]: kubelet.service: Consumed 116ms CPU time, 94.4M memory peak. Mar 19 11:28:12.678638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:28:12.679117 systemd[1]: kubelet.service: Consumed 116ms CPU time, 94.4M memory peak. Mar 19 11:28:12.690680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:28:12.710315 systemd[1]: Reload requested from client PID 2098 ('systemctl') (unit session-7.scope)... Mar 19 11:28:12.710332 systemd[1]: Reloading... Mar 19 11:28:12.778501 zram_generator::config[2140]: No configuration found. Mar 19 11:28:12.988476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:28:13.058993 systemd[1]: Reloading finished in 348 ms. Mar 19 11:28:13.091459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:28:13.093790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:28:13.095079 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 19 11:28:13.095312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:28:13.095351 systemd[1]: kubelet.service: Consumed 77ms CPU time, 82.3M memory peak. Mar 19 11:28:13.096870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:28:13.196257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:28:13.201042 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:28:13.235185 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:28:13.235185 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:28:13.235185 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 11:28:13.235540 kubelet[2189]: I0319 11:28:13.235490 2189 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:28:13.835381 kubelet[2189]: I0319 11:28:13.835328 2189 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:28:13.835381 kubelet[2189]: I0319 11:28:13.835376 2189 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:28:13.835669 kubelet[2189]: I0319 11:28:13.835640 2189 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:28:13.889624 kubelet[2189]: E0319 11:28:13.889568 2189 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:13.890471 kubelet[2189]: I0319 11:28:13.890427 2189 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:28:13.899319 kubelet[2189]: E0319 11:28:13.899287 2189 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:28:13.899319 kubelet[2189]: I0319 11:28:13.899316 2189 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:28:13.902831 kubelet[2189]: I0319 11:28:13.902802 2189 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:28:13.906952 kubelet[2189]: I0319 11:28:13.906919 2189 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 11:28:13.907112 kubelet[2189]: I0319 11:28:13.907078 2189 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:28:13.907271 kubelet[2189]: I0319 11:28:13.907102 2189 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:28:13.907441 kubelet[2189]: I0319 11:28:13.907419 2189 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:28:13.907441 kubelet[2189]: I0319 11:28:13.907435 2189 container_manager_linux.go:300] "Creating device plugin manager" Mar 19 11:28:13.907641 kubelet[2189]: I0319 11:28:13.907618 2189 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:28:13.909197 kubelet[2189]: I0319 11:28:13.909171 2189 kubelet.go:408] "Attempting to sync node with API server" Mar 19 11:28:13.909240 kubelet[2189]: I0319 11:28:13.909200 2189 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:28:13.909295 kubelet[2189]: I0319 11:28:13.909284 2189 kubelet.go:314] "Adding apiserver pod source" Mar 19 11:28:13.909322 kubelet[2189]: I0319 11:28:13.909298 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:28:13.911160 kubelet[2189]: I0319 11:28:13.911132 2189 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:28:13.913278 kubelet[2189]: I0319 11:28:13.913247 2189 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:28:13.916681 kubelet[2189]: W0319 11:28:13.916574 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:13.916681 kubelet[2189]: E0319 11:28:13.916638 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:13.916681 kubelet[2189]: W0319 11:28:13.916652 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 19 11:28:13.916893 kubelet[2189]: W0319 11:28:13.916861 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:13.916978 kubelet[2189]: E0319 11:28:13.916962 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:13.917605 kubelet[2189]: I0319 11:28:13.917576 2189 server.go:1269] "Started kubelet" Mar 19 11:28:13.918128 kubelet[2189]: I0319 11:28:13.918056 2189 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:28:13.921553 kubelet[2189]: I0319 11:28:13.921036 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:28:13.921553 kubelet[2189]: I0319 11:28:13.921331 2189 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:28:13.922268 kubelet[2189]: I0319 11:28:13.922116 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:28:13.922618 kubelet[2189]: I0319 11:28:13.922575 2189 server.go:460] "Adding debug handlers to kubelet server" Mar 19 11:28:13.923909 kubelet[2189]: I0319 11:28:13.923453 2189 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:28:13.924466 kubelet[2189]: E0319 11:28:13.922640 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e30bcf26389a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:28:13.917555111 +0000 UTC m=+0.713627454,LastTimestamp:2025-03-19 11:28:13.917555111 +0000 UTC m=+0.713627454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 19 11:28:13.925004 kubelet[2189]: E0319 11:28:13.924977 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:28:13.925538 kubelet[2189]: I0319 11:28:13.925159 2189 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:28:13.925587 kubelet[2189]: E0319 11:28:13.925564 2189 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:28:13.925587 kubelet[2189]: W0319 11:28:13.925482 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:13.925659 kubelet[2189]: E0319 11:28:13.925597 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:13.925659 kubelet[2189]: E0319 11:28:13.925474 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Mar 19 11:28:13.925697 kubelet[2189]: I0319 11:28:13.925621 2189 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 19 11:28:13.925697 kubelet[2189]: I0319 11:28:13.925685 2189 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:28:13.925787 kubelet[2189]: I0319 11:28:13.925773 2189 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:28:13.925823 kubelet[2189]: I0319 11:28:13.925776 2189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:28:13.927263 kubelet[2189]: I0319 11:28:13.927228 2189 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:28:13.937183 kubelet[2189]: I0319 11:28:13.936976 2189 cpu_manager.go:214] "Starting CPU
manager" policy="none" Mar 19 11:28:13.937183 kubelet[2189]: I0319 11:28:13.936990 2189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:28:13.937183 kubelet[2189]: I0319 11:28:13.937006 2189 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:28:13.938453 kubelet[2189]: I0319 11:28:13.938412 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:28:13.939450 kubelet[2189]: I0319 11:28:13.939351 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:28:13.939450 kubelet[2189]: I0319 11:28:13.939445 2189 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:28:13.939533 kubelet[2189]: I0319 11:28:13.939460 2189 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:28:13.939533 kubelet[2189]: E0319 11:28:13.939503 2189 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:28:13.945794 kubelet[2189]: W0319 11:28:13.945730 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:13.945794 kubelet[2189]: E0319 11:28:13.945790 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:13.999599 kubelet[2189]: I0319 11:28:13.999560 2189 policy_none.go:49] "None policy: Start" Mar 19 11:28:14.000263 kubelet[2189]: I0319 11:28:14.000247 2189 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:28:14.000331 kubelet[2189]: I0319 
11:28:14.000273 2189 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:28:14.006770 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:28:14.020797 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:28:14.023876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:28:14.025498 kubelet[2189]: E0319 11:28:14.025473 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:28:14.034208 kubelet[2189]: I0319 11:28:14.034185 2189 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:28:14.034449 kubelet[2189]: I0319 11:28:14.034392 2189 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:28:14.034449 kubelet[2189]: I0319 11:28:14.034417 2189 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:28:14.034885 kubelet[2189]: I0319 11:28:14.034692 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:28:14.035929 kubelet[2189]: E0319 11:28:14.035906 2189 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 19 11:28:14.046663 systemd[1]: Created slice kubepods-burstable-pod170376581c3ffc336de51e269cc0c1eb.slice - libcontainer container kubepods-burstable-pod170376581c3ffc336de51e269cc0c1eb.slice. Mar 19 11:28:14.070081 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. 
Mar 19 11:28:14.072987 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 19 11:28:14.126854 kubelet[2189]: I0319 11:28:14.126654 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/170376581c3ffc336de51e269cc0c1eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"170376581c3ffc336de51e269cc0c1eb\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:28:14.126854 kubelet[2189]: I0319 11:28:14.126690 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/170376581c3ffc336de51e269cc0c1eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"170376581c3ffc336de51e269cc0c1eb\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:28:14.126854 kubelet[2189]: I0319 11:28:14.126710 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/170376581c3ffc336de51e269cc0c1eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"170376581c3ffc336de51e269cc0c1eb\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:28:14.126854 kubelet[2189]: I0319 11:28:14.126727 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:28:14.126854 kubelet[2189]: I0319 11:28:14.126742 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:28:14.127027 kubelet[2189]: I0319 11:28:14.126756 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:28:14.127027 kubelet[2189]: I0319 11:28:14.126795 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:28:14.127027 kubelet[2189]: I0319 11:28:14.126832 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:28:14.127257 kubelet[2189]: E0319 11:28:14.127209 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Mar 19 11:28:14.135751 kubelet[2189]: I0319 11:28:14.135732 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:28:14.136304 kubelet[2189]: E0319 11:28:14.136255 2189 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 19 11:28:14.228082 kubelet[2189]: I0319 11:28:14.227156 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:28:14.338235 kubelet[2189]: I0319 11:28:14.337679 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:28:14.340173 kubelet[2189]: E0319 11:28:14.340136 2189 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 19 11:28:14.367772 containerd[1468]: time="2025-03-19T11:28:14.367717704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:170376581c3ffc336de51e269cc0c1eb,Namespace:kube-system,Attempt:0,}" Mar 19 11:28:14.374047 containerd[1468]: time="2025-03-19T11:28:14.374015820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 19 11:28:14.374888 containerd[1468]: time="2025-03-19T11:28:14.374865250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 19 11:28:14.527697 kubelet[2189]: E0319 11:28:14.527567 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Mar 19 
11:28:14.741611 kubelet[2189]: I0319 11:28:14.741571 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:28:14.742128 kubelet[2189]: E0319 11:28:14.742088 2189 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 19 11:28:14.897553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067830577.mount: Deactivated successfully. Mar 19 11:28:14.902083 containerd[1468]: time="2025-03-19T11:28:14.901949972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:28:14.904186 containerd[1468]: time="2025-03-19T11:28:14.904130693Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 19 11:28:14.904896 containerd[1468]: time="2025-03-19T11:28:14.904847010Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:28:14.906043 containerd[1468]: time="2025-03-19T11:28:14.906010661Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:28:14.906504 containerd[1468]: time="2025-03-19T11:28:14.906421429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:28:14.907838 containerd[1468]: time="2025-03-19T11:28:14.907752080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:28:14.908075 containerd[1468]: time="2025-03-19T11:28:14.908046719Z" level=info msg="ImageUpdate 
event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:28:14.911795 containerd[1468]: time="2025-03-19T11:28:14.911709068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:28:14.913108 containerd[1468]: time="2025-03-19T11:28:14.912617242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.700402ms" Mar 19 11:28:14.914345 containerd[1468]: time="2025-03-19T11:28:14.914263832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.466963ms" Mar 19 11:28:14.916049 containerd[1468]: time="2025-03-19T11:28:14.916025672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.941478ms" Mar 19 11:28:15.017878 kubelet[2189]: W0319 11:28:15.017803 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:15.017878 kubelet[2189]: E0319 11:28:15.017878 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:15.078786 containerd[1468]: time="2025-03-19T11:28:15.078666705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:15.079056 containerd[1468]: time="2025-03-19T11:28:15.078802272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:15.079056 containerd[1468]: time="2025-03-19T11:28:15.078822096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:15.079056 containerd[1468]: time="2025-03-19T11:28:15.078901829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:15.079354 containerd[1468]: time="2025-03-19T11:28:15.078048941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:15.079354 containerd[1468]: time="2025-03-19T11:28:15.079239268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:15.079354 containerd[1468]: time="2025-03-19T11:28:15.079252297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:15.079354 containerd[1468]: time="2025-03-19T11:28:15.079325156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:15.079896 containerd[1468]: time="2025-03-19T11:28:15.079743607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:15.079896 containerd[1468]: time="2025-03-19T11:28:15.079824819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:15.079896 containerd[1468]: time="2025-03-19T11:28:15.079836929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:15.080122 containerd[1468]: time="2025-03-19T11:28:15.080020976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:15.101508 systemd[1]: Started cri-containerd-217a35c0600c1c5749c79524ce132c66c2d8b356a05bb25ee3f02c3e7b96fd4a.scope - libcontainer container 217a35c0600c1c5749c79524ce132c66c2d8b356a05bb25ee3f02c3e7b96fd4a. Mar 19 11:28:15.102599 systemd[1]: Started cri-containerd-c3e50a40f594c371b129c98b8e211bdf18cf32026ac9f2c0093b525487ad0a8f.scope - libcontainer container c3e50a40f594c371b129c98b8e211bdf18cf32026ac9f2c0093b525487ad0a8f. Mar 19 11:28:15.103575 systemd[1]: Started cri-containerd-c46e0f2e35dafa9a18edb023fee75f5fa617985db4e7aa95bf9836fba531df1c.scope - libcontainer container c46e0f2e35dafa9a18edb023fee75f5fa617985db4e7aa95bf9836fba531df1c. 
Mar 19 11:28:15.134115 containerd[1468]: time="2025-03-19T11:28:15.133111132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:170376581c3ffc336de51e269cc0c1eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"217a35c0600c1c5749c79524ce132c66c2d8b356a05bb25ee3f02c3e7b96fd4a\"" Mar 19 11:28:15.136040 containerd[1468]: time="2025-03-19T11:28:15.136001801Z" level=info msg="CreateContainer within sandbox \"217a35c0600c1c5749c79524ce132c66c2d8b356a05bb25ee3f02c3e7b96fd4a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:28:15.140198 containerd[1468]: time="2025-03-19T11:28:15.140079760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c46e0f2e35dafa9a18edb023fee75f5fa617985db4e7aa95bf9836fba531df1c\"" Mar 19 11:28:15.143085 containerd[1468]: time="2025-03-19T11:28:15.143052240Z" level=info msg="CreateContainer within sandbox \"c46e0f2e35dafa9a18edb023fee75f5fa617985db4e7aa95bf9836fba531df1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:28:15.143568 containerd[1468]: time="2025-03-19T11:28:15.143514535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3e50a40f594c371b129c98b8e211bdf18cf32026ac9f2c0093b525487ad0a8f\"" Mar 19 11:28:15.145651 containerd[1468]: time="2025-03-19T11:28:15.145521341Z" level=info msg="CreateContainer within sandbox \"c3e50a40f594c371b129c98b8e211bdf18cf32026ac9f2c0093b525487ad0a8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:28:15.154511 containerd[1468]: time="2025-03-19T11:28:15.154430429Z" level=info msg="CreateContainer within sandbox \"217a35c0600c1c5749c79524ce132c66c2d8b356a05bb25ee3f02c3e7b96fd4a\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2b667beb0d111172f2873901d7442ca09d9722e8c65dd99c0fa1c4a82bd8f51\"" Mar 19 11:28:15.155290 containerd[1468]: time="2025-03-19T11:28:15.155246309Z" level=info msg="StartContainer for \"f2b667beb0d111172f2873901d7442ca09d9722e8c65dd99c0fa1c4a82bd8f51\"" Mar 19 11:28:15.162800 containerd[1468]: time="2025-03-19T11:28:15.162756644Z" level=info msg="CreateContainer within sandbox \"c46e0f2e35dafa9a18edb023fee75f5fa617985db4e7aa95bf9836fba531df1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61a3a8a522f5092d2e25f20262deb55db2dc79d97d2a48df9c5eb4604f40fc27\"" Mar 19 11:28:15.163338 containerd[1468]: time="2025-03-19T11:28:15.163287362Z" level=info msg="StartContainer for \"61a3a8a522f5092d2e25f20262deb55db2dc79d97d2a48df9c5eb4604f40fc27\"" Mar 19 11:28:15.164711 containerd[1468]: time="2025-03-19T11:28:15.164637835Z" level=info msg="CreateContainer within sandbox \"c3e50a40f594c371b129c98b8e211bdf18cf32026ac9f2c0093b525487ad0a8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4baa744d56ac4e0297b4aee6f3aba67241dd1f058b963310384b6d19ee474458\"" Mar 19 11:28:15.165637 containerd[1468]: time="2025-03-19T11:28:15.165083024Z" level=info msg="StartContainer for \"4baa744d56ac4e0297b4aee6f3aba67241dd1f058b963310384b6d19ee474458\"" Mar 19 11:28:15.180536 systemd[1]: Started cri-containerd-f2b667beb0d111172f2873901d7442ca09d9722e8c65dd99c0fa1c4a82bd8f51.scope - libcontainer container f2b667beb0d111172f2873901d7442ca09d9722e8c65dd99c0fa1c4a82bd8f51. Mar 19 11:28:15.183229 systemd[1]: Started cri-containerd-61a3a8a522f5092d2e25f20262deb55db2dc79d97d2a48df9c5eb4604f40fc27.scope - libcontainer container 61a3a8a522f5092d2e25f20262deb55db2dc79d97d2a48df9c5eb4604f40fc27. 
Mar 19 11:28:15.186834 systemd[1]: Started cri-containerd-4baa744d56ac4e0297b4aee6f3aba67241dd1f058b963310384b6d19ee474458.scope - libcontainer container 4baa744d56ac4e0297b4aee6f3aba67241dd1f058b963310384b6d19ee474458. Mar 19 11:28:15.218546 containerd[1468]: time="2025-03-19T11:28:15.218510859Z" level=info msg="StartContainer for \"f2b667beb0d111172f2873901d7442ca09d9722e8c65dd99c0fa1c4a82bd8f51\" returns successfully" Mar 19 11:28:15.239836 containerd[1468]: time="2025-03-19T11:28:15.239694269Z" level=info msg="StartContainer for \"61a3a8a522f5092d2e25f20262deb55db2dc79d97d2a48df9c5eb4604f40fc27\" returns successfully" Mar 19 11:28:15.239836 containerd[1468]: time="2025-03-19T11:28:15.239768487Z" level=info msg="StartContainer for \"4baa744d56ac4e0297b4aee6f3aba67241dd1f058b963310384b6d19ee474458\" returns successfully" Mar 19 11:28:15.314315 kubelet[2189]: W0319 11:28:15.313591 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:15.314315 kubelet[2189]: E0319 11:28:15.313656 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:15.328353 kubelet[2189]: E0319 11:28:15.328304 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Mar 19 11:28:15.332753 kubelet[2189]: W0319 11:28:15.332649 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Mar 19 11:28:15.332753 kubelet[2189]: E0319 11:28:15.332709 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:28:15.543629 kubelet[2189]: I0319 11:28:15.543237 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:28:17.270868 kubelet[2189]: E0319 11:28:17.270836 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 19 11:28:17.457891 kubelet[2189]: I0319 11:28:17.457309 2189 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 19 11:28:17.913686 kubelet[2189]: I0319 11:28:17.913413 2189 apiserver.go:52] "Watching apiserver" Mar 19 11:28:17.926748 kubelet[2189]: I0319 11:28:17.926705 2189 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:28:18.202928 kubelet[2189]: E0319 11:28:18.202828 2189 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 19 11:28:19.160633 systemd[1]: Reload requested from client PID 2464 ('systemctl') (unit session-7.scope)... Mar 19 11:28:19.160648 systemd[1]: Reloading... Mar 19 11:28:19.233394 zram_generator::config[2511]: No configuration found. 
Mar 19 11:28:19.310331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:28:19.391688 systemd[1]: Reloading finished in 230 ms. Mar 19 11:28:19.415205 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:28:19.426705 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:28:19.426909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:28:19.426952 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 119.8M memory peak. Mar 19 11:28:19.438614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:28:19.531374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:28:19.535708 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:28:19.572485 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:28:19.572485 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:28:19.572485 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 11:28:19.572791 kubelet[2550]: I0319 11:28:19.572541 2550 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:28:19.578242 kubelet[2550]: I0319 11:28:19.578211 2550 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:28:19.578242 kubelet[2550]: I0319 11:28:19.578235 2550 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:28:19.578475 kubelet[2550]: I0319 11:28:19.578453 2550 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:28:19.579712 kubelet[2550]: I0319 11:28:19.579686 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 11:28:19.581617 kubelet[2550]: I0319 11:28:19.581528 2550 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:28:19.584194 kubelet[2550]: E0319 11:28:19.584166 2550 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:28:19.584284 kubelet[2550]: I0319 11:28:19.584272 2550 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:28:19.586808 kubelet[2550]: I0319 11:28:19.586787 2550 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Mar 19 11:28:19.586919 kubelet[2550]: I0319 11:28:19.586905 2550 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 11:28:19.587029 kubelet[2550]: I0319 11:28:19.587005 2550 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:28:19.587189 kubelet[2550]: I0319 11:28:19.587030 2550 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:28:19.587254 kubelet[2550]: I0319 11:28:19.587200 2550 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:28:19.587254 kubelet[2550]: I0319 11:28:19.587209 2550 container_manager_linux.go:300] "Creating device plugin manager"
Mar 19 11:28:19.587254 kubelet[2550]: I0319 11:28:19.587238 2550 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:28:19.587349 kubelet[2550]: I0319 11:28:19.587335 2550 kubelet.go:408] "Attempting to sync node with API server"
Mar 19 11:28:19.587406 kubelet[2550]: I0319 11:28:19.587354 2550 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:28:19.587406 kubelet[2550]: I0319 11:28:19.587399 2550 kubelet.go:314] "Adding apiserver pod source"
Mar 19 11:28:19.587449 kubelet[2550]: I0319 11:28:19.587409 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:28:19.589249 kubelet[2550]: I0319 11:28:19.588972 2550 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:28:19.589893 kubelet[2550]: I0319 11:28:19.589858 2550 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:28:19.590301 kubelet[2550]: I0319 11:28:19.590277 2550 server.go:1269] "Started kubelet"
Mar 19 11:28:19.590746 kubelet[2550]: I0319 11:28:19.590700 2550 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 11:28:19.591250 kubelet[2550]: I0319 11:28:19.591188 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 11:28:19.591530 kubelet[2550]: I0319 11:28:19.591509 2550 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 11:28:19.592077 kubelet[2550]: I0319 11:28:19.592041 2550 server.go:460] "Adding debug handlers to kubelet server"
Mar 19 11:28:19.593101 kubelet[2550]: I0319 11:28:19.593073 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 11:28:19.594082 kubelet[2550]: I0319 11:28:19.594061 2550 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 11:28:19.594633 kubelet[2550]: I0319 11:28:19.594619 2550 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 11:28:19.594878 kubelet[2550]: E0319 11:28:19.594857 2550 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 19 11:28:19.595154 kubelet[2550]: I0319 11:28:19.595135 2550 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 19 11:28:19.595324 kubelet[2550]: I0319 11:28:19.595313 2550 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 11:28:19.599373 kubelet[2550]: I0319 11:28:19.596558 2550 factory.go:221] Registration of the systemd container factory successfully
Mar 19 11:28:19.599373 kubelet[2550]: I0319 11:28:19.596654 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 19 11:28:19.599747 kubelet[2550]: E0319 11:28:19.599715 2550 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:28:19.604979 kubelet[2550]: I0319 11:28:19.604942 2550 factory.go:221] Registration of the containerd container factory successfully
Mar 19 11:28:19.623770 kubelet[2550]: I0319 11:28:19.623743 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 11:28:19.625267 kubelet[2550]: I0319 11:28:19.625249 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 11:28:19.625379 kubelet[2550]: I0319 11:28:19.625348 2550 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 19 11:28:19.625454 kubelet[2550]: I0319 11:28:19.625443 2550 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 19 11:28:19.625541 kubelet[2550]: E0319 11:28:19.625525 2550 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 11:28:19.649956 kubelet[2550]: I0319 11:28:19.649735 2550 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 19 11:28:19.649956 kubelet[2550]: I0319 11:28:19.649751 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 19 11:28:19.649956 kubelet[2550]: I0319 11:28:19.649768 2550 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:28:19.649956 kubelet[2550]: I0319 11:28:19.649886 2550 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 11:28:19.649956 kubelet[2550]: I0319 11:28:19.649897 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 19 11:28:19.649956 kubelet[2550]: I0319 11:28:19.649913 2550 policy_none.go:49] "None policy: Start"
Mar 19 11:28:19.651067 kubelet[2550]: I0319 11:28:19.650790 2550 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 19 11:28:19.651067 kubelet[2550]: I0319 11:28:19.650818 2550 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:28:19.651067 kubelet[2550]: I0319 11:28:19.650964 2550 state_mem.go:75] "Updated machine memory state"
Mar 19 11:28:19.654503 kubelet[2550]: I0319 11:28:19.654480 2550 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 11:28:19.654656 kubelet[2550]: I0319 11:28:19.654638 2550 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 11:28:19.654685 kubelet[2550]: I0319 11:28:19.654656 2550 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 11:28:19.654859 kubelet[2550]: I0319 11:28:19.654842 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 11:28:19.756966 kubelet[2550]: I0319 11:28:19.756863 2550 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 19 11:28:19.762787 kubelet[2550]: I0319 11:28:19.762736 2550 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Mar 19 11:28:19.762876 kubelet[2550]: I0319 11:28:19.762807 2550 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 19 11:28:19.896096 kubelet[2550]: I0319 11:28:19.896040 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/170376581c3ffc336de51e269cc0c1eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"170376581c3ffc336de51e269cc0c1eb\") " pod="kube-system/kube-apiserver-localhost"
Mar 19 11:28:19.896096 kubelet[2550]: I0319 11:28:19.896073 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:28:19.896096 kubelet[2550]: I0319 11:28:19.896091 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 19 11:28:19.896096 kubelet[2550]: I0319 11:28:19.896105 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/170376581c3ffc336de51e269cc0c1eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"170376581c3ffc336de51e269cc0c1eb\") " pod="kube-system/kube-apiserver-localhost"
Mar 19 11:28:19.896406 kubelet[2550]: I0319 11:28:19.896122 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/170376581c3ffc336de51e269cc0c1eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"170376581c3ffc336de51e269cc0c1eb\") " pod="kube-system/kube-apiserver-localhost"
Mar 19 11:28:19.896406 kubelet[2550]: I0319 11:28:19.896164 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:28:19.896406 kubelet[2550]: I0319 11:28:19.896194 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:28:19.896406 kubelet[2550]: I0319 11:28:19.896211 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:28:19.896406 kubelet[2550]: I0319 11:28:19.896233 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:28:20.587996 kubelet[2550]: I0319 11:28:20.587928 2550 apiserver.go:52] "Watching apiserver"
Mar 19 11:28:20.595492 kubelet[2550]: I0319 11:28:20.595454 2550 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 19 11:28:20.650400 kubelet[2550]: I0319 11:28:20.650315 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.650302163 podStartE2EDuration="1.650302163s" podCreationTimestamp="2025-03-19 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:28:20.650266241 +0000 UTC m=+1.111210140" watchObservedRunningTime="2025-03-19 11:28:20.650302163 +0000 UTC m=+1.111246062"
Mar 19 11:28:20.657953 kubelet[2550]: I0319 11:28:20.657907 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.65789612 podStartE2EDuration="1.65789612s" podCreationTimestamp="2025-03-19 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:28:20.657816074 +0000 UTC m=+1.118759973" watchObservedRunningTime="2025-03-19 11:28:20.65789612 +0000 UTC m=+1.118839979"
Mar 19 11:28:20.666407 kubelet[2550]: I0319 11:28:20.665084 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6650712479999998 podStartE2EDuration="1.665071248s" podCreationTimestamp="2025-03-19 11:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:28:20.664269433 +0000 UTC m=+1.125213332" watchObservedRunningTime="2025-03-19 11:28:20.665071248 +0000 UTC m=+1.126015147"
Mar 19 11:28:24.548572 sudo[1651]: pam_unix(sudo:session): session closed for user root
Mar 19 11:28:24.549934 sshd[1650]: Connection closed by 10.0.0.1 port 53588
Mar 19 11:28:24.550283 sshd-session[1647]: pam_unix(sshd:session): session closed for user core
Mar 19 11:28:24.553628 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:53588.service: Deactivated successfully.
Mar 19 11:28:24.555497 systemd[1]: session-7.scope: Deactivated successfully.
Mar 19 11:28:24.555678 systemd[1]: session-7.scope: Consumed 7.129s CPU time, 220.4M memory peak.
Mar 19 11:28:24.556647 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit.
Mar 19 11:28:24.557664 systemd-logind[1451]: Removed session 7.
Mar 19 11:28:24.784048 kubelet[2550]: I0319 11:28:24.783995 2550 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 19 11:28:24.784702 kubelet[2550]: I0319 11:28:24.784611 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 19 11:28:24.784766 containerd[1468]: time="2025-03-19T11:28:24.784404039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 19 11:28:25.348343 systemd[1]: Created slice kubepods-besteffort-podaa0f5ab9_d242_4a93_b3d6_0652051c9308.slice - libcontainer container kubepods-besteffort-podaa0f5ab9_d242_4a93_b3d6_0652051c9308.slice.
Mar 19 11:28:25.435774 kubelet[2550]: I0319 11:28:25.435424 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa0f5ab9-d242-4a93-b3d6-0652051c9308-kube-proxy\") pod \"kube-proxy-t2qw6\" (UID: \"aa0f5ab9-d242-4a93-b3d6-0652051c9308\") " pod="kube-system/kube-proxy-t2qw6"
Mar 19 11:28:25.435774 kubelet[2550]: I0319 11:28:25.435484 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa0f5ab9-d242-4a93-b3d6-0652051c9308-lib-modules\") pod \"kube-proxy-t2qw6\" (UID: \"aa0f5ab9-d242-4a93-b3d6-0652051c9308\") " pod="kube-system/kube-proxy-t2qw6"
Mar 19 11:28:25.435774 kubelet[2550]: I0319 11:28:25.435500 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa0f5ab9-d242-4a93-b3d6-0652051c9308-xtables-lock\") pod \"kube-proxy-t2qw6\" (UID: \"aa0f5ab9-d242-4a93-b3d6-0652051c9308\") " pod="kube-system/kube-proxy-t2qw6"
Mar 19 11:28:25.435774 kubelet[2550]: I0319 11:28:25.435518 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkjl\" (UniqueName: \"kubernetes.io/projected/aa0f5ab9-d242-4a93-b3d6-0652051c9308-kube-api-access-6tkjl\") pod \"kube-proxy-t2qw6\" (UID: \"aa0f5ab9-d242-4a93-b3d6-0652051c9308\") " pod="kube-system/kube-proxy-t2qw6"
Mar 19 11:28:25.558778 kubelet[2550]: E0319 11:28:25.558740 2550 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 19 11:28:25.558778 kubelet[2550]: E0319 11:28:25.558778 2550 projected.go:194] Error preparing data for projected volume kube-api-access-6tkjl for pod kube-system/kube-proxy-t2qw6: configmap "kube-root-ca.crt" not found
Mar 19 11:28:25.558945 kubelet[2550]: E0319 11:28:25.558850 2550 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa0f5ab9-d242-4a93-b3d6-0652051c9308-kube-api-access-6tkjl podName:aa0f5ab9-d242-4a93-b3d6-0652051c9308 nodeName:}" failed. No retries permitted until 2025-03-19 11:28:26.058823287 +0000 UTC m=+6.519767146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6tkjl" (UniqueName: "kubernetes.io/projected/aa0f5ab9-d242-4a93-b3d6-0652051c9308-kube-api-access-6tkjl") pod "kube-proxy-t2qw6" (UID: "aa0f5ab9-d242-4a93-b3d6-0652051c9308") : configmap "kube-root-ca.crt" not found
Mar 19 11:28:25.860155 systemd[1]: Created slice kubepods-besteffort-pod47002416_4598_4be3_88bb_536e2224499d.slice - libcontainer container kubepods-besteffort-pod47002416_4598_4be3_88bb_536e2224499d.slice.
Mar 19 11:28:25.938518 kubelet[2550]: I0319 11:28:25.938475 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqzp\" (UniqueName: \"kubernetes.io/projected/47002416-4598-4be3-88bb-536e2224499d-kube-api-access-mjqzp\") pod \"tigera-operator-64ff5465b7-lljzt\" (UID: \"47002416-4598-4be3-88bb-536e2224499d\") " pod="tigera-operator/tigera-operator-64ff5465b7-lljzt"
Mar 19 11:28:25.938518 kubelet[2550]: I0319 11:28:25.938517 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47002416-4598-4be3-88bb-536e2224499d-var-lib-calico\") pod \"tigera-operator-64ff5465b7-lljzt\" (UID: \"47002416-4598-4be3-88bb-536e2224499d\") " pod="tigera-operator/tigera-operator-64ff5465b7-lljzt"
Mar 19 11:28:26.163991 containerd[1468]: time="2025-03-19T11:28:26.163701275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-lljzt,Uid:47002416-4598-4be3-88bb-536e2224499d,Namespace:tigera-operator,Attempt:0,}"
Mar 19 11:28:26.191071 containerd[1468]: time="2025-03-19T11:28:26.190869275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:28:26.191071 containerd[1468]: time="2025-03-19T11:28:26.190914037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:28:26.191071 containerd[1468]: time="2025-03-19T11:28:26.190924958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:28:26.191071 containerd[1468]: time="2025-03-19T11:28:26.190984401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:28:26.217514 systemd[1]: Started cri-containerd-6680e43da720c2627873e23981b220dcdc30bbd1ba9262fff3c9aad37e8c475f.scope - libcontainer container 6680e43da720c2627873e23981b220dcdc30bbd1ba9262fff3c9aad37e8c475f.
Mar 19 11:28:26.247283 containerd[1468]: time="2025-03-19T11:28:26.247140610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-lljzt,Uid:47002416-4598-4be3-88bb-536e2224499d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6680e43da720c2627873e23981b220dcdc30bbd1ba9262fff3c9aad37e8c475f\""
Mar 19 11:28:26.250895 containerd[1468]: time="2025-03-19T11:28:26.250547255Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 19 11:28:26.265747 containerd[1468]: time="2025-03-19T11:28:26.265288972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2qw6,Uid:aa0f5ab9-d242-4a93-b3d6-0652051c9308,Namespace:kube-system,Attempt:0,}"
Mar 19 11:28:26.284634 containerd[1468]: time="2025-03-19T11:28:26.283189082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:28:26.284634 containerd[1468]: time="2025-03-19T11:28:26.283244925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:28:26.284634 containerd[1468]: time="2025-03-19T11:28:26.283256205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:28:26.284634 containerd[1468]: time="2025-03-19T11:28:26.283338289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:28:26.303624 systemd[1]: Started cri-containerd-3e878795947438afbf8b04863477c53fc42343c027c51306431f3f1aa1986aec.scope - libcontainer container 3e878795947438afbf8b04863477c53fc42343c027c51306431f3f1aa1986aec.
Mar 19 11:28:26.333924 containerd[1468]: time="2025-03-19T11:28:26.333895946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2qw6,Uid:aa0f5ab9-d242-4a93-b3d6-0652051c9308,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e878795947438afbf8b04863477c53fc42343c027c51306431f3f1aa1986aec\""
Mar 19 11:28:26.336496 containerd[1468]: time="2025-03-19T11:28:26.336334745Z" level=info msg="CreateContainer within sandbox \"3e878795947438afbf8b04863477c53fc42343c027c51306431f3f1aa1986aec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 19 11:28:26.368031 containerd[1468]: time="2025-03-19T11:28:26.367969482Z" level=info msg="CreateContainer within sandbox \"3e878795947438afbf8b04863477c53fc42343c027c51306431f3f1aa1986aec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"033ed7e99f9b1c9d3df966c4520703b1354805164b2af23d91bc482a06283d6c\""
Mar 19 11:28:26.369006 containerd[1468]: time="2025-03-19T11:28:26.368472186Z" level=info msg="StartContainer for \"033ed7e99f9b1c9d3df966c4520703b1354805164b2af23d91bc482a06283d6c\""
Mar 19 11:28:26.393506 systemd[1]: Started cri-containerd-033ed7e99f9b1c9d3df966c4520703b1354805164b2af23d91bc482a06283d6c.scope - libcontainer container 033ed7e99f9b1c9d3df966c4520703b1354805164b2af23d91bc482a06283d6c.
Mar 19 11:28:26.418148 containerd[1468]: time="2025-03-19T11:28:26.418057596Z" level=info msg="StartContainer for \"033ed7e99f9b1c9d3df966c4520703b1354805164b2af23d91bc482a06283d6c\" returns successfully"
Mar 19 11:28:26.669916 kubelet[2550]: I0319 11:28:26.669695 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2qw6" podStartSLOduration=1.669679705 podStartE2EDuration="1.669679705s" podCreationTimestamp="2025-03-19 11:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:28:26.662722167 +0000 UTC m=+7.123666066" watchObservedRunningTime="2025-03-19 11:28:26.669679705 +0000 UTC m=+7.130623604"
Mar 19 11:28:27.692470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241621417.mount: Deactivated successfully.
Mar 19 11:28:28.235348 containerd[1468]: time="2025-03-19T11:28:28.235293900Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:28:28.235753 containerd[1468]: time="2025-03-19T11:28:28.235696598Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115"
Mar 19 11:28:28.238267 containerd[1468]: time="2025-03-19T11:28:28.238232229Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:28:28.240349 containerd[1468]: time="2025-03-19T11:28:28.240310879Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:28:28.241476 containerd[1468]: time="2025-03-19T11:28:28.241441489Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 1.990863031s"
Mar 19 11:28:28.241517 containerd[1468]: time="2025-03-19T11:28:28.241476850Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\""
Mar 19 11:28:28.244868 containerd[1468]: time="2025-03-19T11:28:28.244828116Z" level=info msg="CreateContainer within sandbox \"6680e43da720c2627873e23981b220dcdc30bbd1ba9262fff3c9aad37e8c475f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 19 11:28:28.254764 containerd[1468]: time="2025-03-19T11:28:28.254643345Z" level=info msg="CreateContainer within sandbox \"6680e43da720c2627873e23981b220dcdc30bbd1ba9262fff3c9aad37e8c475f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"26887959594302c5200d68da6cbec210f33ef80f3473bc3ef03f67a0443b1db6\""
Mar 19 11:28:28.255298 containerd[1468]: time="2025-03-19T11:28:28.255261092Z" level=info msg="StartContainer for \"26887959594302c5200d68da6cbec210f33ef80f3473bc3ef03f67a0443b1db6\""
Mar 19 11:28:28.295566 systemd[1]: Started cri-containerd-26887959594302c5200d68da6cbec210f33ef80f3473bc3ef03f67a0443b1db6.scope - libcontainer container 26887959594302c5200d68da6cbec210f33ef80f3473bc3ef03f67a0443b1db6.
Mar 19 11:28:28.326654 containerd[1468]: time="2025-03-19T11:28:28.326609444Z" level=info msg="StartContainer for \"26887959594302c5200d68da6cbec210f33ef80f3473bc3ef03f67a0443b1db6\" returns successfully"
Mar 19 11:28:28.667793 kubelet[2550]: I0319 11:28:28.667734 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-lljzt" podStartSLOduration=1.669415681 podStartE2EDuration="3.664273976s" podCreationTimestamp="2025-03-19 11:28:25 +0000 UTC" firstStartedPulling="2025-03-19 11:28:26.248681925 +0000 UTC m=+6.709625824" lastFinishedPulling="2025-03-19 11:28:28.24354022 +0000 UTC m=+8.704484119" observedRunningTime="2025-03-19 11:28:28.664189653 +0000 UTC m=+9.125133552" watchObservedRunningTime="2025-03-19 11:28:28.664273976 +0000 UTC m=+9.125217835"
Mar 19 11:28:32.230294 systemd[1]: Created slice kubepods-besteffort-pod159b8a65_cf3e_43b9_82da_4fbdd241576a.slice - libcontainer container kubepods-besteffort-pod159b8a65_cf3e_43b9_82da_4fbdd241576a.slice.
Mar 19 11:28:32.279266 kubelet[2550]: I0319 11:28:32.279225 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/159b8a65-cf3e-43b9-82da-4fbdd241576a-typha-certs\") pod \"calico-typha-644876d856-jtttc\" (UID: \"159b8a65-cf3e-43b9-82da-4fbdd241576a\") " pod="calico-system/calico-typha-644876d856-jtttc"
Mar 19 11:28:32.279266 kubelet[2550]: I0319 11:28:32.279270 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkrd\" (UniqueName: \"kubernetes.io/projected/159b8a65-cf3e-43b9-82da-4fbdd241576a-kube-api-access-lnkrd\") pod \"calico-typha-644876d856-jtttc\" (UID: \"159b8a65-cf3e-43b9-82da-4fbdd241576a\") " pod="calico-system/calico-typha-644876d856-jtttc"
Mar 19 11:28:32.279652 kubelet[2550]: I0319 11:28:32.279296 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159b8a65-cf3e-43b9-82da-4fbdd241576a-tigera-ca-bundle\") pod \"calico-typha-644876d856-jtttc\" (UID: \"159b8a65-cf3e-43b9-82da-4fbdd241576a\") " pod="calico-system/calico-typha-644876d856-jtttc"
Mar 19 11:28:32.307495 systemd[1]: Created slice kubepods-besteffort-pod932b9162_278b_4087_966e_3639a9f5698a.slice - libcontainer container kubepods-besteffort-pod932b9162_278b_4087_966e_3639a9f5698a.slice.
Mar 19 11:28:32.379493 kubelet[2550]: I0319 11:28:32.379451 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-cni-log-dir\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379493 kubelet[2550]: I0319 11:28:32.379494 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/932b9162-278b-4087-966e-3639a9f5698a-tigera-ca-bundle\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379649 kubelet[2550]: I0319 11:28:32.379511 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-cni-net-dir\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379649 kubelet[2550]: I0319 11:28:32.379534 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-policysync\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379649 kubelet[2550]: I0319 11:28:32.379552 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-cni-bin-dir\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379649 kubelet[2550]: I0319 11:28:32.379567 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-flexvol-driver-host\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379649 kubelet[2550]: I0319 11:28:32.379595 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-lib-modules\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379770 kubelet[2550]: I0319 11:28:32.379612 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-var-lib-calico\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379770 kubelet[2550]: I0319 11:28:32.379639 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-xtables-lock\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379770 kubelet[2550]: I0319 11:28:32.379654 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/932b9162-278b-4087-966e-3639a9f5698a-node-certs\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379770 kubelet[2550]: I0319 11:28:32.379669 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/932b9162-278b-4087-966e-3639a9f5698a-var-run-calico\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.379770 kubelet[2550]: I0319 11:28:32.379683 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgfj\" (UniqueName: \"kubernetes.io/projected/932b9162-278b-4087-966e-3639a9f5698a-kube-api-access-9zgfj\") pod \"calico-node-t8k9k\" (UID: \"932b9162-278b-4087-966e-3639a9f5698a\") " pod="calico-system/calico-node-t8k9k"
Mar 19 11:28:32.411885 kubelet[2550]: E0319 11:28:32.411763 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a"
Mar 19 11:28:32.480925 kubelet[2550]: I0319 11:28:32.480448 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vlq\" (UniqueName: \"kubernetes.io/projected/8140799f-a3c9-4f76-a616-271cd3fce86a-kube-api-access-x5vlq\") pod \"csi-node-driver-p89jv\" (UID: \"8140799f-a3c9-4f76-a616-271cd3fce86a\") " pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:32.480925 kubelet[2550]: I0319 11:28:32.480540 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8140799f-a3c9-4f76-a616-271cd3fce86a-kubelet-dir\") pod \"csi-node-driver-p89jv\" (UID: \"8140799f-a3c9-4f76-a616-271cd3fce86a\") " pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:32.480925 kubelet[2550]: I0319 11:28:32.480568 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8140799f-a3c9-4f76-a616-271cd3fce86a-varrun\") pod \"csi-node-driver-p89jv\" (UID: \"8140799f-a3c9-4f76-a616-271cd3fce86a\") " pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:32.480925 kubelet[2550]: I0319 11:28:32.480604 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8140799f-a3c9-4f76-a616-271cd3fce86a-socket-dir\") pod \"csi-node-driver-p89jv\" (UID: \"8140799f-a3c9-4f76-a616-271cd3fce86a\") " pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:32.480925 kubelet[2550]: I0319 11:28:32.480639 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8140799f-a3c9-4f76-a616-271cd3fce86a-registration-dir\") pod \"csi-node-driver-p89jv\" (UID: \"8140799f-a3c9-4f76-a616-271cd3fce86a\") " pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:32.483755 kubelet[2550]: E0319 11:28:32.483423 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 11:28:32.483755 kubelet[2550]: W0319 11:28:32.483446 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 11:28:32.483755 kubelet[2550]: E0319 11:28:32.483481 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 11:28:32.495341 kubelet[2550]: E0319 11:28:32.493437 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 11:28:32.495341 kubelet[2550]: W0319 11:28:32.493457 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 11:28:32.495341 kubelet[2550]: E0319 11:28:32.493472 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 19 11:28:32.535143 containerd[1468]: time="2025-03-19T11:28:32.535099948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644876d856-jtttc,Uid:159b8a65-cf3e-43b9-82da-4fbdd241576a,Namespace:calico-system,Attempt:0,}"
Mar 19 11:28:32.582049 kubelet[2550]: E0319 11:28:32.582022 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 19 11:28:32.582049 kubelet[2550]: W0319 11:28:32.582042 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 19 11:28:32.582267 kubelet[2550]: E0319 11:28:32.582060 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.582267 kubelet[2550]: E0319 11:28:32.582257 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.582267 kubelet[2550]: W0319 11:28:32.582265 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.582339 kubelet[2550]: E0319 11:28:32.582284 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.582595 kubelet[2550]: E0319 11:28:32.582556 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.582595 kubelet[2550]: W0319 11:28:32.582568 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.582595 kubelet[2550]: E0319 11:28:32.582583 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.582788 kubelet[2550]: E0319 11:28:32.582774 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.582788 kubelet[2550]: W0319 11:28:32.582786 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.582844 kubelet[2550]: E0319 11:28:32.582805 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.583023 kubelet[2550]: E0319 11:28:32.583009 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.583023 kubelet[2550]: W0319 11:28:32.583020 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.583096 kubelet[2550]: E0319 11:28:32.583033 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.584737 kubelet[2550]: E0319 11:28:32.584704 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.584737 kubelet[2550]: W0319 11:28:32.584733 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.587006 kubelet[2550]: E0319 11:28:32.585321 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.587006 kubelet[2550]: E0319 11:28:32.586206 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.587223 kubelet[2550]: W0319 11:28:32.587202 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.587262 kubelet[2550]: E0319 11:28:32.587234 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.587510 kubelet[2550]: E0319 11:28:32.587490 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.587510 kubelet[2550]: W0319 11:28:32.587508 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.587589 kubelet[2550]: E0319 11:28:32.587528 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.587912 kubelet[2550]: E0319 11:28:32.587894 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.587912 kubelet[2550]: W0319 11:28:32.587911 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.587981 kubelet[2550]: E0319 11:28:32.587970 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.588728 kubelet[2550]: E0319 11:28:32.588644 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.588728 kubelet[2550]: W0319 11:28:32.588660 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.588728 kubelet[2550]: E0319 11:28:32.588724 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.588915 containerd[1468]: time="2025-03-19T11:28:32.588599683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:32.588915 containerd[1468]: time="2025-03-19T11:28:32.588655885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:32.588915 containerd[1468]: time="2025-03-19T11:28:32.588670925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:32.588915 containerd[1468]: time="2025-03-19T11:28:32.588809850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589015 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.589851 kubelet[2550]: W0319 11:28:32.589026 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589091 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589195 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.589851 kubelet[2550]: W0319 11:28:32.589210 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589256 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589349 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.589851 kubelet[2550]: W0319 11:28:32.589396 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589454 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.589851 kubelet[2550]: E0319 11:28:32.589548 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.592122 kubelet[2550]: W0319 11:28:32.589557 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.592122 kubelet[2550]: E0319 11:28:32.589601 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.592122 kubelet[2550]: E0319 11:28:32.589709 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.592122 kubelet[2550]: W0319 11:28:32.589717 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.592122 kubelet[2550]: E0319 11:28:32.589731 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.592122 kubelet[2550]: E0319 11:28:32.589906 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.592122 kubelet[2550]: W0319 11:28:32.589913 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.592122 kubelet[2550]: E0319 11:28:32.589926 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.599567 kubelet[2550]: E0319 11:28:32.599527 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.599567 kubelet[2550]: W0319 11:28:32.599548 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.599567 kubelet[2550]: E0319 11:28:32.599572 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.599743 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.600314 kubelet[2550]: W0319 11:28:32.599755 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.599768 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.599887 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.600314 kubelet[2550]: W0319 11:28:32.599895 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.599928 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.600012 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.600314 kubelet[2550]: W0319 11:28:32.600019 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.600061 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.600314 kubelet[2550]: E0319 11:28:32.600126 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.601835 kubelet[2550]: W0319 11:28:32.600132 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.601835 kubelet[2550]: E0319 11:28:32.600160 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.601835 kubelet[2550]: E0319 11:28:32.600777 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.601835 kubelet[2550]: W0319 11:28:32.600897 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.601835 kubelet[2550]: E0319 11:28:32.600990 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.601835 kubelet[2550]: E0319 11:28:32.601219 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.601835 kubelet[2550]: W0319 11:28:32.601294 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.601835 kubelet[2550]: E0319 11:28:32.601312 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.604389 kubelet[2550]: E0319 11:28:32.602709 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.604389 kubelet[2550]: W0319 11:28:32.602725 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.604389 kubelet[2550]: E0319 11:28:32.602743 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.604389 kubelet[2550]: E0319 11:28:32.602948 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.604389 kubelet[2550]: W0319 11:28:32.602958 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.604389 kubelet[2550]: E0319 11:28:32.602967 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.617383 containerd[1468]: time="2025-03-19T11:28:32.615070740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8k9k,Uid:932b9162-278b-4087-966e-3639a9f5698a,Namespace:calico-system,Attempt:0,}" Mar 19 11:28:32.617656 systemd[1]: Started cri-containerd-ad3c4b0ea1f0caaedbe3a999288fa298911f7968562fbd95d917b8901cc9cd26.scope - libcontainer container ad3c4b0ea1f0caaedbe3a999288fa298911f7968562fbd95d917b8901cc9cd26. Mar 19 11:28:32.640290 kubelet[2550]: E0319 11:28:32.640249 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.640539 kubelet[2550]: W0319 11:28:32.640327 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.640880 kubelet[2550]: E0319 11:28:32.640352 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.665010 containerd[1468]: time="2025-03-19T11:28:32.664911346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:32.665010 containerd[1468]: time="2025-03-19T11:28:32.664970028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:32.665010 containerd[1468]: time="2025-03-19T11:28:32.664985549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:32.665914 containerd[1468]: time="2025-03-19T11:28:32.665814498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:32.688946 containerd[1468]: time="2025-03-19T11:28:32.688870035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644876d856-jtttc,Uid:159b8a65-cf3e-43b9-82da-4fbdd241576a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad3c4b0ea1f0caaedbe3a999288fa298911f7968562fbd95d917b8901cc9cd26\"" Mar 19 11:28:32.691916 containerd[1468]: time="2025-03-19T11:28:32.691742376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 19 11:28:32.693569 systemd[1]: Started cri-containerd-4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8.scope - libcontainer container 4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8. 
Mar 19 11:28:32.718595 containerd[1468]: time="2025-03-19T11:28:32.718432962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8k9k,Uid:932b9162-278b-4087-966e-3639a9f5698a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\"" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.779549 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.781159 kubelet[2550]: W0319 11:28:32.779569 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.779586 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.779775 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.781159 kubelet[2550]: W0319 11:28:32.779782 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.779790 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.779930 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.781159 kubelet[2550]: W0319 11:28:32.779937 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.779944 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:32.781159 kubelet[2550]: E0319 11:28:32.780123 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.781449 kubelet[2550]: W0319 11:28:32.780130 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.781449 kubelet[2550]: E0319 11:28:32.780137 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:32.781449 kubelet[2550]: E0319 11:28:32.780370 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:32.781449 kubelet[2550]: W0319 11:28:32.780380 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:32.781449 kubelet[2550]: E0319 11:28:32.780387 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:33.145117 update_engine[1455]: I20250319 11:28:33.145051 1455 update_attempter.cc:509] Updating boot flags... Mar 19 11:28:33.172453 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3071) Mar 19 11:28:33.223488 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3069) Mar 19 11:28:33.257412 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3069) Mar 19 11:28:33.626146 kubelet[2550]: E0319 11:28:33.626107 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:34.549163 containerd[1468]: time="2025-03-19T11:28:34.549110097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:34.550631 containerd[1468]: time="2025-03-19T11:28:34.550451660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes 
read=28363957" Mar 19 11:28:34.551408 containerd[1468]: time="2025-03-19T11:28:34.551320688Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:34.553290 containerd[1468]: time="2025-03-19T11:28:34.553251950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:34.554640 containerd[1468]: time="2025-03-19T11:28:34.554593673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.86267701s" Mar 19 11:28:34.554640 containerd[1468]: time="2025-03-19T11:28:34.554627754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 19 11:28:34.556787 containerd[1468]: time="2025-03-19T11:28:34.556754982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 19 11:28:34.574455 containerd[1468]: time="2025-03-19T11:28:34.574417148Z" level=info msg="CreateContainer within sandbox \"ad3c4b0ea1f0caaedbe3a999288fa298911f7968562fbd95d917b8901cc9cd26\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 19 11:28:34.597502 containerd[1468]: time="2025-03-19T11:28:34.597460606Z" level=info msg="CreateContainer within sandbox \"ad3c4b0ea1f0caaedbe3a999288fa298911f7968562fbd95d917b8901cc9cd26\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b92a8c8a94b5f1aa2b7caba751e2f2c7699c01cbf13cad19643cb99bc1f5d3e2\"" 
Mar 19 11:28:34.598059 containerd[1468]: time="2025-03-19T11:28:34.597974183Z" level=info msg="StartContainer for \"b92a8c8a94b5f1aa2b7caba751e2f2c7699c01cbf13cad19643cb99bc1f5d3e2\"" Mar 19 11:28:34.625529 systemd[1]: Started cri-containerd-b92a8c8a94b5f1aa2b7caba751e2f2c7699c01cbf13cad19643cb99bc1f5d3e2.scope - libcontainer container b92a8c8a94b5f1aa2b7caba751e2f2c7699c01cbf13cad19643cb99bc1f5d3e2. Mar 19 11:28:34.697089 containerd[1468]: time="2025-03-19T11:28:34.696434018Z" level=info msg="StartContainer for \"b92a8c8a94b5f1aa2b7caba751e2f2c7699c01cbf13cad19643cb99bc1f5d3e2\" returns successfully" Mar 19 11:28:35.628084 kubelet[2550]: E0319 11:28:35.628024 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.703443 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704174 kubelet[2550]: W0319 11:28:35.703514 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.703534 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.703750 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704174 kubelet[2550]: W0319 11:28:35.703759 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.703777 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.703940 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704174 kubelet[2550]: W0319 11:28:35.703949 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.703957 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.704174 kubelet[2550]: E0319 11:28:35.704136 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704543 kubelet[2550]: W0319 11:28:35.704147 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704543 kubelet[2550]: E0319 11:28:35.704156 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.704543 kubelet[2550]: E0319 11:28:35.704334 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704543 kubelet[2550]: W0319 11:28:35.704344 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704543 kubelet[2550]: E0319 11:28:35.704353 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.704690 kubelet[2550]: E0319 11:28:35.704640 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704690 kubelet[2550]: W0319 11:28:35.704651 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704690 kubelet[2550]: E0319 11:28:35.704660 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.704846 kubelet[2550]: E0319 11:28:35.704830 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.704846 kubelet[2550]: W0319 11:28:35.704841 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.704901 kubelet[2550]: E0319 11:28:35.704849 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.705033 kubelet[2550]: E0319 11:28:35.705005 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.705033 kubelet[2550]: W0319 11:28:35.705019 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.705033 kubelet[2550]: E0319 11:28:35.705027 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.705335 kubelet[2550]: E0319 11:28:35.705319 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.705335 kubelet[2550]: W0319 11:28:35.705331 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.705426 kubelet[2550]: E0319 11:28:35.705340 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.705607 kubelet[2550]: E0319 11:28:35.705593 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.705607 kubelet[2550]: W0319 11:28:35.705605 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.705665 kubelet[2550]: E0319 11:28:35.705617 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.706293 kubelet[2550]: E0319 11:28:35.705792 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.706293 kubelet[2550]: W0319 11:28:35.705804 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.706293 kubelet[2550]: E0319 11:28:35.705813 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.706293 kubelet[2550]: E0319 11:28:35.706040 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.706293 kubelet[2550]: W0319 11:28:35.706051 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.706293 kubelet[2550]: E0319 11:28:35.706085 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.706489 kubelet[2550]: E0319 11:28:35.706421 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.706489 kubelet[2550]: W0319 11:28:35.706431 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.706489 kubelet[2550]: E0319 11:28:35.706441 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.706672 kubelet[2550]: E0319 11:28:35.706660 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.706672 kubelet[2550]: W0319 11:28:35.706671 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.706741 kubelet[2550]: E0319 11:28:35.706681 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.706892 kubelet[2550]: E0319 11:28:35.706880 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.706939 kubelet[2550]: W0319 11:28:35.706894 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.706939 kubelet[2550]: E0319 11:28:35.706903 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.718780 kubelet[2550]: I0319 11:28:35.718670 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-644876d856-jtttc" podStartSLOduration=1.85296353 podStartE2EDuration="3.71865728s" podCreationTimestamp="2025-03-19 11:28:32 +0000 UTC" firstStartedPulling="2025-03-19 11:28:32.690909067 +0000 UTC m=+13.151852926" lastFinishedPulling="2025-03-19 11:28:34.556602777 +0000 UTC m=+15.017546676" observedRunningTime="2025-03-19 11:28:35.718452034 +0000 UTC m=+16.179395933" watchObservedRunningTime="2025-03-19 11:28:35.71865728 +0000 UTC m=+16.179601219" Mar 19 11:28:35.721204 containerd[1468]: time="2025-03-19T11:28:35.721159276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:35.722152 containerd[1468]: time="2025-03-19T11:28:35.722098425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 19 11:28:35.723014 containerd[1468]: time="2025-03-19T11:28:35.722976892Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:35.725235 containerd[1468]: time="2025-03-19T11:28:35.725203520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:35.726987 containerd[1468]: time="2025-03-19T11:28:35.726949613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.17016491s" Mar 19 11:28:35.727041 containerd[1468]: time="2025-03-19T11:28:35.726985774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 19 11:28:35.729428 containerd[1468]: time="2025-03-19T11:28:35.729353567Z" level=info msg="CreateContainer within sandbox \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 19 11:28:35.744410 containerd[1468]: time="2025-03-19T11:28:35.744333704Z" level=info msg="CreateContainer within sandbox \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8\"" Mar 19 11:28:35.744904 containerd[1468]: time="2025-03-19T11:28:35.744871920Z" level=info msg="StartContainer for \"a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8\"" Mar 19 11:28:35.775578 systemd[1]: Started cri-containerd-a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8.scope - libcontainer container a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8. 
Mar 19 11:28:35.802410 containerd[1468]: time="2025-03-19T11:28:35.802341474Z" level=info msg="StartContainer for \"a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8\" returns successfully" Mar 19 11:28:35.805952 kubelet[2550]: E0319 11:28:35.805863 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.805952 kubelet[2550]: W0319 11:28:35.805884 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.805952 kubelet[2550]: E0319 11:28:35.805900 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.806212 kubelet[2550]: E0319 11:28:35.806171 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.806212 kubelet[2550]: W0319 11:28:35.806202 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.806288 kubelet[2550]: E0319 11:28:35.806216 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.806477 kubelet[2550]: E0319 11:28:35.806448 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.806477 kubelet[2550]: W0319 11:28:35.806463 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.806477 kubelet[2550]: E0319 11:28:35.806474 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.806693 kubelet[2550]: E0319 11:28:35.806669 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.806693 kubelet[2550]: W0319 11:28:35.806680 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.806693 kubelet[2550]: E0319 11:28:35.806690 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.806875 kubelet[2550]: E0319 11:28:35.806863 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.806875 kubelet[2550]: W0319 11:28:35.806874 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.806922 kubelet[2550]: E0319 11:28:35.806882 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.807028 kubelet[2550]: E0319 11:28:35.807018 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.807055 kubelet[2550]: W0319 11:28:35.807027 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.807113 kubelet[2550]: E0319 11:28:35.807093 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.807189 kubelet[2550]: E0319 11:28:35.807177 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.807214 kubelet[2550]: W0319 11:28:35.807188 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.807250 kubelet[2550]: E0319 11:28:35.807228 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.807350 kubelet[2550]: E0319 11:28:35.807340 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.807390 kubelet[2550]: W0319 11:28:35.807350 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.807390 kubelet[2550]: E0319 11:28:35.807375 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.807531 kubelet[2550]: E0319 11:28:35.807518 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.807531 kubelet[2550]: W0319 11:28:35.807529 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.807587 kubelet[2550]: E0319 11:28:35.807544 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.807684 kubelet[2550]: E0319 11:28:35.807674 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.807716 kubelet[2550]: W0319 11:28:35.807683 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.807716 kubelet[2550]: E0319 11:28:35.807693 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.807911 kubelet[2550]: E0319 11:28:35.807898 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.807943 kubelet[2550]: W0319 11:28:35.807910 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.807943 kubelet[2550]: E0319 11:28:35.807925 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.808320 kubelet[2550]: E0319 11:28:35.808303 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.808320 kubelet[2550]: W0319 11:28:35.808320 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.808415 kubelet[2550]: E0319 11:28:35.808365 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.808664 kubelet[2550]: E0319 11:28:35.808647 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.808696 kubelet[2550]: W0319 11:28:35.808664 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.808696 kubelet[2550]: E0319 11:28:35.808678 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.808898 kubelet[2550]: E0319 11:28:35.808885 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.808898 kubelet[2550]: W0319 11:28:35.808897 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.808950 kubelet[2550]: E0319 11:28:35.808914 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.809157 kubelet[2550]: E0319 11:28:35.809128 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.809157 kubelet[2550]: W0319 11:28:35.809144 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.809202 kubelet[2550]: E0319 11:28:35.809164 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.809353 kubelet[2550]: E0319 11:28:35.809340 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.809353 kubelet[2550]: W0319 11:28:35.809353 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.809430 kubelet[2550]: E0319 11:28:35.809374 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:28:35.809551 kubelet[2550]: E0319 11:28:35.809538 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.809573 kubelet[2550]: W0319 11:28:35.809553 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.809573 kubelet[2550]: E0319 11:28:35.809563 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.809926 kubelet[2550]: E0319 11:28:35.809913 2550 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:28:35.809951 kubelet[2550]: W0319 11:28:35.809925 2550 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:28:35.809951 kubelet[2550]: E0319 11:28:35.809935 2550 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:28:35.840662 systemd[1]: cri-containerd-a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8.scope: Deactivated successfully. 
Mar 19 11:28:35.887891 containerd[1468]: time="2025-03-19T11:28:35.881981904Z" level=info msg="shim disconnected" id=a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8 namespace=k8s.io Mar 19 11:28:35.887891 containerd[1468]: time="2025-03-19T11:28:35.887819122Z" level=warning msg="cleaning up after shim disconnected" id=a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8 namespace=k8s.io Mar 19 11:28:35.887891 containerd[1468]: time="2025-03-19T11:28:35.887834763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:28:36.569655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a30c26ad33d8c99bd2bc9fff5acd3980e1640819e452da2161b577c587a422d8-rootfs.mount: Deactivated successfully. Mar 19 11:28:36.707780 containerd[1468]: time="2025-03-19T11:28:36.707590930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 19 11:28:37.626064 kubelet[2550]: E0319 11:28:37.626007 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:39.626923 kubelet[2550]: E0319 11:28:39.626877 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:39.912739 containerd[1468]: time="2025-03-19T11:28:39.912625868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:39.913560 containerd[1468]: time="2025-03-19T11:28:39.913418408Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 19 11:28:39.914380 containerd[1468]: time="2025-03-19T11:28:39.914183467Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:39.917987 containerd[1468]: time="2025-03-19T11:28:39.916831494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:39.917987 containerd[1468]: time="2025-03-19T11:28:39.917601034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 3.20988702s" Mar 19 11:28:39.917987 containerd[1468]: time="2025-03-19T11:28:39.917626754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 19 11:28:39.921547 containerd[1468]: time="2025-03-19T11:28:39.921503012Z" level=info msg="CreateContainer within sandbox \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 19 11:28:39.941310 containerd[1468]: time="2025-03-19T11:28:39.941254912Z" level=info msg="CreateContainer within sandbox \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12\"" Mar 19 11:28:39.941814 containerd[1468]: time="2025-03-19T11:28:39.941732964Z" level=info msg="StartContainer 
for \"5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12\"" Mar 19 11:28:39.973526 systemd[1]: Started cri-containerd-5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12.scope - libcontainer container 5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12. Mar 19 11:28:39.998938 containerd[1468]: time="2025-03-19T11:28:39.998897730Z" level=info msg="StartContainer for \"5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12\" returns successfully" Mar 19 11:28:40.268727 kubelet[2550]: I0319 11:28:40.268620 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:28:40.504193 systemd[1]: cri-containerd-5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12.scope: Deactivated successfully. Mar 19 11:28:40.504462 systemd[1]: cri-containerd-5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12.scope: Consumed 459ms CPU time, 163.3M memory peak, 4K read from disk, 150.3M written to disk. Mar 19 11:28:40.522221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12-rootfs.mount: Deactivated successfully. Mar 19 11:28:40.592065 kubelet[2550]: I0319 11:28:40.592031 2550 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 19 11:28:40.642636 systemd[1]: Created slice kubepods-burstable-pod3633e4f0_d27d_4033_847e_b1b7705fa5ab.slice - libcontainer container kubepods-burstable-pod3633e4f0_d27d_4033_847e_b1b7705fa5ab.slice. Mar 19 11:28:40.649872 systemd[1]: Created slice kubepods-burstable-pod373671c3_c6e1_4b04_b9bc_0ec6975e8f38.slice - libcontainer container kubepods-burstable-pod373671c3_c6e1_4b04_b9bc_0ec6975e8f38.slice. Mar 19 11:28:40.655012 systemd[1]: Created slice kubepods-besteffort-podcb1e58e2_690b_4405_933a_1a05af6347b1.slice - libcontainer container kubepods-besteffort-podcb1e58e2_690b_4405_933a_1a05af6347b1.slice. 
Mar 19 11:28:40.658597 systemd[1]: Created slice kubepods-besteffort-podae83a25a_107c_45c2_be2c_8154d796978c.slice - libcontainer container kubepods-besteffort-podae83a25a_107c_45c2_be2c_8154d796978c.slice. Mar 19 11:28:40.665377 containerd[1468]: time="2025-03-19T11:28:40.665300209Z" level=info msg="shim disconnected" id=5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12 namespace=k8s.io Mar 19 11:28:40.665484 containerd[1468]: time="2025-03-19T11:28:40.665373291Z" level=warning msg="cleaning up after shim disconnected" id=5f95efdb4b0af8d458ed99e933591a246141ec558160f30ea8a3e33a0362fa12 namespace=k8s.io Mar 19 11:28:40.665484 containerd[1468]: time="2025-03-19T11:28:40.665390491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:28:40.686995 systemd[1]: Created slice kubepods-besteffort-podbb9726d3_f563_49e9_bde4_e80223a64d32.slice - libcontainer container kubepods-besteffort-podbb9726d3_f563_49e9_bde4_e80223a64d32.slice. Mar 19 11:28:40.716439 containerd[1468]: time="2025-03-19T11:28:40.716335963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 19 11:28:40.789940 kubelet[2550]: I0319 11:28:40.789822 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966r6\" (UniqueName: \"kubernetes.io/projected/373671c3-c6e1-4b04-b9bc-0ec6975e8f38-kube-api-access-966r6\") pod \"coredns-6f6b679f8f-5sdzc\" (UID: \"373671c3-c6e1-4b04-b9bc-0ec6975e8f38\") " pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:40.789940 kubelet[2550]: I0319 11:28:40.789890 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cb1e58e2-690b-4405-933a-1a05af6347b1-calico-apiserver-certs\") pod \"calico-apiserver-769667d9d6-grngk\" (UID: \"cb1e58e2-690b-4405-933a-1a05af6347b1\") " pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:40.790400 kubelet[2550]: 
I0319 11:28:40.790226 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae83a25a-107c-45c2-be2c-8154d796978c-tigera-ca-bundle\") pod \"calico-kube-controllers-9cbc458fd-8dzcg\" (UID: \"ae83a25a-107c-45c2-be2c-8154d796978c\") " pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:40.790800 kubelet[2550]: I0319 11:28:40.790762 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7k48\" (UniqueName: \"kubernetes.io/projected/ae83a25a-107c-45c2-be2c-8154d796978c-kube-api-access-t7k48\") pod \"calico-kube-controllers-9cbc458fd-8dzcg\" (UID: \"ae83a25a-107c-45c2-be2c-8154d796978c\") " pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:40.790903 kubelet[2550]: I0319 11:28:40.790808 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bb9726d3-f563-49e9-bde4-e80223a64d32-calico-apiserver-certs\") pod \"calico-apiserver-769667d9d6-c8hsk\" (UID: \"bb9726d3-f563-49e9-bde4-e80223a64d32\") " pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:40.790903 kubelet[2550]: I0319 11:28:40.790828 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3633e4f0-d27d-4033-847e-b1b7705fa5ab-config-volume\") pod \"coredns-6f6b679f8f-f6wsj\" (UID: \"3633e4f0-d27d-4033-847e-b1b7705fa5ab\") " pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:40.790903 kubelet[2550]: I0319 11:28:40.790847 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfwdg\" (UniqueName: \"kubernetes.io/projected/3633e4f0-d27d-4033-847e-b1b7705fa5ab-kube-api-access-mfwdg\") pod \"coredns-6f6b679f8f-f6wsj\" (UID: 
\"3633e4f0-d27d-4033-847e-b1b7705fa5ab\") " pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:40.790903 kubelet[2550]: I0319 11:28:40.790882 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6jk\" (UniqueName: \"kubernetes.io/projected/bb9726d3-f563-49e9-bde4-e80223a64d32-kube-api-access-7s6jk\") pod \"calico-apiserver-769667d9d6-c8hsk\" (UID: \"bb9726d3-f563-49e9-bde4-e80223a64d32\") " pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:40.790903 kubelet[2550]: I0319 11:28:40.790903 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/373671c3-c6e1-4b04-b9bc-0ec6975e8f38-config-volume\") pod \"coredns-6f6b679f8f-5sdzc\" (UID: \"373671c3-c6e1-4b04-b9bc-0ec6975e8f38\") " pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:40.791106 kubelet[2550]: I0319 11:28:40.790920 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxrq\" (UniqueName: \"kubernetes.io/projected/cb1e58e2-690b-4405-933a-1a05af6347b1-kube-api-access-smxrq\") pod \"calico-apiserver-769667d9d6-grngk\" (UID: \"cb1e58e2-690b-4405-933a-1a05af6347b1\") " pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:40.950166 containerd[1468]: time="2025-03-19T11:28:40.950014534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:0,}" Mar 19 11:28:40.952155 containerd[1468]: time="2025-03-19T11:28:40.952124746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:0,}" Mar 19 11:28:40.958104 containerd[1468]: time="2025-03-19T11:28:40.957876445Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:0,}" Mar 19 11:28:40.961918 containerd[1468]: time="2025-03-19T11:28:40.961831940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:0,}" Mar 19 11:28:41.016666 containerd[1468]: time="2025-03-19T11:28:41.016488046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:0,}" Mar 19 11:28:41.354543 containerd[1468]: time="2025-03-19T11:28:41.354410908Z" level=error msg="Failed to destroy network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.354936 containerd[1468]: time="2025-03-19T11:28:41.354906439Z" level=error msg="encountered an error cleaning up failed sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.355126 containerd[1468]: time="2025-03-19T11:28:41.355102764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 19 11:28:41.358027 containerd[1468]: time="2025-03-19T11:28:41.357478459Z" level=error msg="Failed to destroy network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.358027 containerd[1468]: time="2025-03-19T11:28:41.357815147Z" level=error msg="encountered an error cleaning up failed sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.358027 containerd[1468]: time="2025-03-19T11:28:41.357889908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.358209 kubelet[2550]: E0319 11:28:41.357619 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.358209 kubelet[2550]: E0319 11:28:41.357699 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:41.358209 kubelet[2550]: E0319 11:28:41.357717 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:41.358312 kubelet[2550]: E0319 11:28:41.357764 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" podUID="ae83a25a-107c-45c2-be2c-8154d796978c" Mar 19 11:28:41.358312 kubelet[2550]: E0319 11:28:41.358037 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.358312 kubelet[2550]: E0319 11:28:41.358124 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:41.358442 kubelet[2550]: E0319 11:28:41.358140 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:41.358442 kubelet[2550]: E0319 11:28:41.358176 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" podUID="cb1e58e2-690b-4405-933a-1a05af6347b1" Mar 19 11:28:41.361094 containerd[1468]: time="2025-03-19T11:28:41.361062262Z" level=error msg="Failed to destroy network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.362623 containerd[1468]: time="2025-03-19T11:28:41.362590457Z" level=error msg="encountered an error cleaning up failed sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.362688 containerd[1468]: time="2025-03-19T11:28:41.362642618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.362711 containerd[1468]: time="2025-03-19T11:28:41.362688139Z" level=error msg="Failed to destroy network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.362927 kubelet[2550]: E0319 11:28:41.362892 2550 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.362999 kubelet[2550]: E0319 11:28:41.362937 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:41.362999 kubelet[2550]: E0319 11:28:41.362953 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:41.362999 kubelet[2550]: E0319 11:28:41.362983 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5sdzc" podUID="373671c3-c6e1-4b04-b9bc-0ec6975e8f38" Mar 19 11:28:41.363577 kubelet[2550]: E0319 11:28:41.363315 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.363577 kubelet[2550]: E0319 11:28:41.363337 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:41.363577 kubelet[2550]: E0319 11:28:41.363386 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:41.363708 containerd[1468]: time="2025-03-19T11:28:41.363003067Z" level=error msg="encountered an error cleaning up failed sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.363708 
containerd[1468]: time="2025-03-19T11:28:41.363063988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.363773 kubelet[2550]: E0319 11:28:41.363420 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f6wsj" podUID="3633e4f0-d27d-4033-847e-b1b7705fa5ab" Mar 19 11:28:41.368782 containerd[1468]: time="2025-03-19T11:28:41.368747840Z" level=error msg="Failed to destroy network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.369065 containerd[1468]: time="2025-03-19T11:28:41.369036486Z" level=error msg="encountered an error cleaning up failed sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.369117 containerd[1468]: time="2025-03-19T11:28:41.369097608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.369435 kubelet[2550]: E0319 11:28:41.369251 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.369435 kubelet[2550]: E0319 11:28:41.369290 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:41.369435 kubelet[2550]: E0319 11:28:41.369305 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:41.369556 kubelet[2550]: E0319 11:28:41.369341 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" podUID="bb9726d3-f563-49e9-bde4-e80223a64d32" Mar 19 11:28:41.637846 systemd[1]: Created slice kubepods-besteffort-pod8140799f_a3c9_4f76_a616_271cd3fce86a.slice - libcontainer container kubepods-besteffort-pod8140799f_a3c9_4f76_a616_271cd3fce86a.slice. 
Mar 19 11:28:41.646958 containerd[1468]: time="2025-03-19T11:28:41.646846156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:0,}" Mar 19 11:28:41.711412 containerd[1468]: time="2025-03-19T11:28:41.711365290Z" level=error msg="Failed to destroy network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.711746 containerd[1468]: time="2025-03-19T11:28:41.711696217Z" level=error msg="encountered an error cleaning up failed sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.711784 containerd[1468]: time="2025-03-19T11:28:41.711758979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.712008 kubelet[2550]: E0319 11:28:41.711966 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:41.712057 kubelet[2550]: E0319 11:28:41.712026 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv" Mar 19 11:28:41.712057 kubelet[2550]: E0319 11:28:41.712046 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv" Mar 19 11:28:41.712247 kubelet[2550]: E0319 11:28:41.712082 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:41.717864 kubelet[2550]: I0319 11:28:41.717835 2550 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5" Mar 19 11:28:41.718629 containerd[1468]: time="2025-03-19T11:28:41.718593937Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\"" Mar 19 11:28:41.721220 kubelet[2550]: I0319 11:28:41.721164 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.723237444Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\"" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.723445409Z" level=info msg="Ensure that sandbox eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3 in task-service has been cleanup successfully" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.723523091Z" level=info msg="Ensure that sandbox b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5 in task-service has been cleanup successfully" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.723788977Z" level=info msg="TearDown network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" successfully" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.723805698Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" returns successfully" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.723906580Z" level=info msg="TearDown network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" successfully" Mar 19 11:28:41.723418 containerd[1468]: time="2025-03-19T11:28:41.724295869Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" returns successfully" Mar 19 11:28:41.724859 containerd[1468]: 
time="2025-03-19T11:28:41.724827161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:1,}" Mar 19 11:28:41.725067 kubelet[2550]: I0319 11:28:41.724986 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd" Mar 19 11:28:41.726096 containerd[1468]: time="2025-03-19T11:28:41.725454656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:1,}" Mar 19 11:28:41.726551 containerd[1468]: time="2025-03-19T11:28:41.726390997Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\"" Mar 19 11:28:41.726551 containerd[1468]: time="2025-03-19T11:28:41.726548441Z" level=info msg="Ensure that sandbox fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd in task-service has been cleanup successfully" Mar 19 11:28:41.727717 kubelet[2550]: I0319 11:28:41.727694 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96" Mar 19 11:28:41.728462 containerd[1468]: time="2025-03-19T11:28:41.728013395Z" level=info msg="TearDown network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" successfully" Mar 19 11:28:41.728462 containerd[1468]: time="2025-03-19T11:28:41.728041076Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" returns successfully" Mar 19 11:28:41.728462 containerd[1468]: time="2025-03-19T11:28:41.728122237Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\"" Mar 19 11:28:41.728462 containerd[1468]: time="2025-03-19T11:28:41.728268361Z" level=info msg="Ensure that 
sandbox 5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96 in task-service has been cleanup successfully" Mar 19 11:28:41.729004 containerd[1468]: time="2025-03-19T11:28:41.728739852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:1,}" Mar 19 11:28:41.737689 containerd[1468]: time="2025-03-19T11:28:41.737641658Z" level=info msg="TearDown network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" successfully" Mar 19 11:28:41.737689 containerd[1468]: time="2025-03-19T11:28:41.737681899Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" returns successfully" Mar 19 11:28:41.738462 containerd[1468]: time="2025-03-19T11:28:41.738335274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:1,}" Mar 19 11:28:41.740098 kubelet[2550]: I0319 11:28:41.739541 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43" Mar 19 11:28:41.740175 containerd[1468]: time="2025-03-19T11:28:41.740149236Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\"" Mar 19 11:28:41.740510 containerd[1468]: time="2025-03-19T11:28:41.740345520Z" level=info msg="Ensure that sandbox 840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43 in task-service has been cleanup successfully" Mar 19 11:28:41.742468 containerd[1468]: time="2025-03-19T11:28:41.742275285Z" level=info msg="TearDown network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" successfully" Mar 19 11:28:41.742468 containerd[1468]: time="2025-03-19T11:28:41.742312006Z" level=info msg="StopPodSandbox for 
\"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" returns successfully" Mar 19 11:28:41.743011 containerd[1468]: time="2025-03-19T11:28:41.742893779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:1,}" Mar 19 11:28:41.744963 kubelet[2550]: I0319 11:28:41.744935 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17" Mar 19 11:28:41.745756 containerd[1468]: time="2025-03-19T11:28:41.745550601Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\"" Mar 19 11:28:41.745825 containerd[1468]: time="2025-03-19T11:28:41.745808447Z" level=info msg="Ensure that sandbox 97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17 in task-service has been cleanup successfully" Mar 19 11:28:41.746469 containerd[1468]: time="2025-03-19T11:28:41.746034092Z" level=info msg="TearDown network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" successfully" Mar 19 11:28:41.746469 containerd[1468]: time="2025-03-19T11:28:41.746067413Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" returns successfully" Mar 19 11:28:41.746948 containerd[1468]: time="2025-03-19T11:28:41.746543904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:1,}" Mar 19 11:28:41.934223 systemd[1]: run-netns-cni\x2deb22fdf2\x2daa34\x2dd5dc\x2d91ce\x2d2de2f00e750a.mount: Deactivated successfully. Mar 19 11:28:41.934324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43-shm.mount: Deactivated successfully. 
Mar 19 11:28:41.934397 systemd[1]: run-netns-cni\x2d4d0e34fe\x2d026f\x2d40aa\x2d275a\x2d63e8da88f097.mount: Deactivated successfully. Mar 19 11:28:41.934445 systemd[1]: run-netns-cni\x2da2b1dcb0\x2d3261\x2dfbbb\x2d9a24\x2d339c56cbe399.mount: Deactivated successfully. Mar 19 11:28:41.934494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17-shm.mount: Deactivated successfully. Mar 19 11:28:41.934541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3-shm.mount: Deactivated successfully. Mar 19 11:28:42.132445 containerd[1468]: time="2025-03-19T11:28:42.132082140Z" level=error msg="Failed to destroy network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.132965 containerd[1468]: time="2025-03-19T11:28:42.132604992Z" level=error msg="encountered an error cleaning up failed sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.132965 containerd[1468]: time="2025-03-19T11:28:42.132666713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 19 11:28:42.133033 kubelet[2550]: E0319 11:28:42.132899 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.133033 kubelet[2550]: E0319 11:28:42.132964 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:42.133033 kubelet[2550]: E0319 11:28:42.132986 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:42.133518 kubelet[2550]: E0319 11:28:42.133027 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f6wsj" podUID="3633e4f0-d27d-4033-847e-b1b7705fa5ab" Mar 19 11:28:42.168484 containerd[1468]: time="2025-03-19T11:28:42.168228381Z" level=error msg="Failed to destroy network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.170354 containerd[1468]: time="2025-03-19T11:28:42.170096543Z" level=error msg="encountered an error cleaning up failed sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.170354 containerd[1468]: time="2025-03-19T11:28:42.170167144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.171130 kubelet[2550]: E0319 11:28:42.170418 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.171130 kubelet[2550]: E0319 11:28:42.170480 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:42.171130 kubelet[2550]: E0319 11:28:42.170500 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:42.174159 kubelet[2550]: E0319 11:28:42.170538 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" podUID="bb9726d3-f563-49e9-bde4-e80223a64d32" Mar 19 11:28:42.180382 containerd[1468]: 
time="2025-03-19T11:28:42.180182846Z" level=error msg="Failed to destroy network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.180972 containerd[1468]: time="2025-03-19T11:28:42.180932863Z" level=error msg="encountered an error cleaning up failed sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.181064 containerd[1468]: time="2025-03-19T11:28:42.180977584Z" level=error msg="Failed to destroy network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.181147 containerd[1468]: time="2025-03-19T11:28:42.180997305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.181384 kubelet[2550]: E0319 11:28:42.181333 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.181473 containerd[1468]: time="2025-03-19T11:28:42.181424874Z" level=error msg="encountered an error cleaning up failed sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.181566 containerd[1468]: time="2025-03-19T11:28:42.181496396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.181640 kubelet[2550]: E0319 11:28:42.181555 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv" Mar 19 11:28:42.181812 kubelet[2550]: E0319 11:28:42.181791 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv" Mar 19 11:28:42.182053 kubelet[2550]: E0319 11:28:42.181928 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:42.182137 kubelet[2550]: E0319 11:28:42.182058 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.182137 kubelet[2550]: E0319 11:28:42.182096 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:42.182137 kubelet[2550]: E0319 11:28:42.182111 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:42.182212 kubelet[2550]: E0319 11:28:42.182140 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5sdzc" podUID="373671c3-c6e1-4b04-b9bc-0ec6975e8f38" Mar 19 11:28:42.185916 containerd[1468]: time="2025-03-19T11:28:42.185541765Z" level=error msg="Failed to destroy network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.186031 containerd[1468]: time="2025-03-19T11:28:42.185901493Z" level=error msg="encountered an error cleaning up failed sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 
19 11:28:42.186031 containerd[1468]: time="2025-03-19T11:28:42.185986215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.186413 kubelet[2550]: E0319 11:28:42.186159 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.186413 kubelet[2550]: E0319 11:28:42.186199 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:42.186413 kubelet[2550]: E0319 11:28:42.186214 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:42.186500 kubelet[2550]: E0319 11:28:42.186250 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" podUID="ae83a25a-107c-45c2-be2c-8154d796978c" Mar 19 11:28:42.187245 containerd[1468]: time="2025-03-19T11:28:42.187201482Z" level=error msg="Failed to destroy network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.190130 containerd[1468]: time="2025-03-19T11:28:42.190083266Z" level=error msg="encountered an error cleaning up failed sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.190203 containerd[1468]: time="2025-03-19T11:28:42.190163348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:1,} 
failed, error" error="failed to setup network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.190512 kubelet[2550]: E0319 11:28:42.190399 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:42.190512 kubelet[2550]: E0319 11:28:42.190456 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:42.190512 kubelet[2550]: E0319 11:28:42.190473 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:42.190693 kubelet[2550]: E0319 11:28:42.190655 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" podUID="cb1e58e2-690b-4405-933a-1a05af6347b1" Mar 19 11:28:42.748701 kubelet[2550]: I0319 11:28:42.748668 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88" Mar 19 11:28:42.749976 containerd[1468]: time="2025-03-19T11:28:42.749939799Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\"" Mar 19 11:28:42.750122 containerd[1468]: time="2025-03-19T11:28:42.750102602Z" level=info msg="Ensure that sandbox 37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88 in task-service has been cleanup successfully" Mar 19 11:28:42.750462 containerd[1468]: time="2025-03-19T11:28:42.750441410Z" level=info msg="TearDown network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" successfully" Mar 19 11:28:42.750531 containerd[1468]: time="2025-03-19T11:28:42.750462810Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" returns successfully" Mar 19 11:28:42.750752 kubelet[2550]: I0319 11:28:42.750730 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434" Mar 19 11:28:42.750927 containerd[1468]: 
time="2025-03-19T11:28:42.750876219Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\"" Mar 19 11:28:42.750983 containerd[1468]: time="2025-03-19T11:28:42.750969221Z" level=info msg="TearDown network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" successfully" Mar 19 11:28:42.750983 containerd[1468]: time="2025-03-19T11:28:42.750981542Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" returns successfully" Mar 19 11:28:42.752138 containerd[1468]: time="2025-03-19T11:28:42.751398071Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\"" Mar 19 11:28:42.752138 containerd[1468]: time="2025-03-19T11:28:42.751553034Z" level=info msg="Ensure that sandbox eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434 in task-service has been cleanup successfully" Mar 19 11:28:42.752138 containerd[1468]: time="2025-03-19T11:28:42.751825600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:2,}" Mar 19 11:28:42.752138 containerd[1468]: time="2025-03-19T11:28:42.751858841Z" level=info msg="TearDown network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" successfully" Mar 19 11:28:42.752138 containerd[1468]: time="2025-03-19T11:28:42.751873121Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" returns successfully" Mar 19 11:28:42.752695 containerd[1468]: time="2025-03-19T11:28:42.752671179Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\"" Mar 19 11:28:42.752770 containerd[1468]: time="2025-03-19T11:28:42.752747181Z" level=info msg="TearDown network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" 
successfully" Mar 19 11:28:42.752770 containerd[1468]: time="2025-03-19T11:28:42.752761021Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" returns successfully" Mar 19 11:28:42.753344 containerd[1468]: time="2025-03-19T11:28:42.753308113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:2,}" Mar 19 11:28:42.756731 kubelet[2550]: I0319 11:28:42.755780 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15" Mar 19 11:28:42.756830 containerd[1468]: time="2025-03-19T11:28:42.756737349Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\"" Mar 19 11:28:42.757215 containerd[1468]: time="2025-03-19T11:28:42.757171479Z" level=info msg="Ensure that sandbox 3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15 in task-service has been cleanup successfully" Mar 19 11:28:42.758725 containerd[1468]: time="2025-03-19T11:28:42.758657752Z" level=info msg="TearDown network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" successfully" Mar 19 11:28:42.758725 containerd[1468]: time="2025-03-19T11:28:42.758682312Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" returns successfully" Mar 19 11:28:42.758835 kubelet[2550]: I0319 11:28:42.758685 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa" Mar 19 11:28:42.760329 containerd[1468]: time="2025-03-19T11:28:42.760170505Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\"" Mar 19 11:28:42.760443 containerd[1468]: time="2025-03-19T11:28:42.760342469Z" level=info msg="Ensure 
that sandbox 1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa in task-service has been cleanup successfully"
Mar 19 11:28:42.760590 containerd[1468]: time="2025-03-19T11:28:42.760563034Z" level=info msg="TearDown network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" successfully"
Mar 19 11:28:42.760590 containerd[1468]: time="2025-03-19T11:28:42.760584035Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" returns successfully"
Mar 19 11:28:42.760808 containerd[1468]: time="2025-03-19T11:28:42.760686277Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\""
Mar 19 11:28:42.761661 containerd[1468]: time="2025-03-19T11:28:42.761383492Z" level=info msg="TearDown network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" successfully"
Mar 19 11:28:42.761661 containerd[1468]: time="2025-03-19T11:28:42.761407253Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" returns successfully"
Mar 19 11:28:42.762194 containerd[1468]: time="2025-03-19T11:28:42.761859903Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\""
Mar 19 11:28:42.762377 containerd[1468]: time="2025-03-19T11:28:42.762336153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:2,}"
Mar 19 11:28:42.763674 containerd[1468]: time="2025-03-19T11:28:42.763105250Z" level=info msg="TearDown network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" successfully"
Mar 19 11:28:42.763674 containerd[1468]: time="2025-03-19T11:28:42.763130771Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" returns successfully"
Mar 19 11:28:42.765154 containerd[1468]: time="2025-03-19T11:28:42.765115415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:2,}"
Mar 19 11:28:42.766144 kubelet[2550]: I0319 11:28:42.766110 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff"
Mar 19 11:28:42.767743 containerd[1468]: time="2025-03-19T11:28:42.767716953Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\""
Mar 19 11:28:42.769606 containerd[1468]: time="2025-03-19T11:28:42.769575434Z" level=info msg="Ensure that sandbox 388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff in task-service has been cleanup successfully"
Mar 19 11:28:42.772114 kubelet[2550]: I0319 11:28:42.770820 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911"
Mar 19 11:28:42.772603 containerd[1468]: time="2025-03-19T11:28:42.772550540Z" level=info msg="TearDown network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" successfully"
Mar 19 11:28:42.772723 containerd[1468]: time="2025-03-19T11:28:42.772705623Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" returns successfully"
Mar 19 11:28:42.772961 containerd[1468]: time="2025-03-19T11:28:42.772940309Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\""
Mar 19 11:28:42.774002 containerd[1468]: time="2025-03-19T11:28:42.773800008Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\""
Mar 19 11:28:42.774002 containerd[1468]: time="2025-03-19T11:28:42.773912250Z" level=info msg="TearDown network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" successfully"
Mar 19 11:28:42.774002 containerd[1468]: time="2025-03-19T11:28:42.773923130Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" returns successfully"
Mar 19 11:28:42.774671 containerd[1468]: time="2025-03-19T11:28:42.774628906Z" level=info msg="Ensure that sandbox 5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911 in task-service has been cleanup successfully"
Mar 19 11:28:42.775142 containerd[1468]: time="2025-03-19T11:28:42.775121557Z" level=info msg="TearDown network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" successfully"
Mar 19 11:28:42.775412 containerd[1468]: time="2025-03-19T11:28:42.775391683Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" returns successfully"
Mar 19 11:28:42.775653 containerd[1468]: time="2025-03-19T11:28:42.774844831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:2,}"
Mar 19 11:28:42.776161 containerd[1468]: time="2025-03-19T11:28:42.776123379Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\""
Mar 19 11:28:42.776252 containerd[1468]: time="2025-03-19T11:28:42.776234822Z" level=info msg="TearDown network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" successfully"
Mar 19 11:28:42.776252 containerd[1468]: time="2025-03-19T11:28:42.776249862Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" returns successfully"
Mar 19 11:28:42.776937 containerd[1468]: time="2025-03-19T11:28:42.776909797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:2,}"
Mar 19 11:28:42.864236 containerd[1468]: time="2025-03-19T11:28:42.864153691Z" level=error msg="Failed to destroy network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.865150 containerd[1468]: time="2025-03-19T11:28:42.865116912Z" level=error msg="encountered an error cleaning up failed sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.865223 containerd[1468]: time="2025-03-19T11:28:42.865186754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.866965 kubelet[2550]: E0319 11:28:42.865404 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.866965 kubelet[2550]: E0319 11:28:42.865461 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc"
Mar 19 11:28:42.866965 kubelet[2550]: E0319 11:28:42.865481 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc"
Mar 19 11:28:42.867099 kubelet[2550]: E0319 11:28:42.865522 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5sdzc" podUID="373671c3-c6e1-4b04-b9bc-0ec6975e8f38"
Mar 19 11:28:42.882410 containerd[1468]: time="2025-03-19T11:28:42.882348854Z" level=error msg="Failed to destroy network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.882717 containerd[1468]: time="2025-03-19T11:28:42.882684342Z" level=error msg="encountered an error cleaning up failed sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.882760 containerd[1468]: time="2025-03-19T11:28:42.882745063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.883179 kubelet[2550]: E0319 11:28:42.882959 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.883179 kubelet[2550]: E0319 11:28:42.883069 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk"
Mar 19 11:28:42.883179 kubelet[2550]: E0319 11:28:42.883096 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk"
Mar 19 11:28:42.883334 kubelet[2550]: E0319 11:28:42.883136 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" podUID="cb1e58e2-690b-4405-933a-1a05af6347b1"
Mar 19 11:28:42.886891 containerd[1468]: time="2025-03-19T11:28:42.886839114Z" level=error msg="Failed to destroy network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.887713 containerd[1468]: time="2025-03-19T11:28:42.887679732Z" level=error msg="encountered an error cleaning up failed sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.887764 containerd[1468]: time="2025-03-19T11:28:42.887741934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.888260 kubelet[2550]: E0319 11:28:42.887971 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.888260 kubelet[2550]: E0319 11:28:42.888029 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg"
Mar 19 11:28:42.888260 kubelet[2550]: E0319 11:28:42.888046 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg"
Mar 19 11:28:42.888429 kubelet[2550]: E0319 11:28:42.888084 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" podUID="ae83a25a-107c-45c2-be2c-8154d796978c"
Mar 19 11:28:42.888682 containerd[1468]: time="2025-03-19T11:28:42.888649354Z" level=error msg="Failed to destroy network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.889498 containerd[1468]: time="2025-03-19T11:28:42.889466372Z" level=error msg="encountered an error cleaning up failed sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.889563 containerd[1468]: time="2025-03-19T11:28:42.889518333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.889734 kubelet[2550]: E0319 11:28:42.889707 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.889785 kubelet[2550]: E0319 11:28:42.889748 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj"
Mar 19 11:28:42.889785 kubelet[2550]: E0319 11:28:42.889764 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj"
Mar 19 11:28:42.889918 kubelet[2550]: E0319 11:28:42.889811 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f6wsj" podUID="3633e4f0-d27d-4033-847e-b1b7705fa5ab"
Mar 19 11:28:42.906725 containerd[1468]: time="2025-03-19T11:28:42.906685394Z" level=error msg="Failed to destroy network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.907030 containerd[1468]: time="2025-03-19T11:28:42.906995681Z" level=error msg="encountered an error cleaning up failed sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.907075 containerd[1468]: time="2025-03-19T11:28:42.907052202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.907437 kubelet[2550]: E0319 11:28:42.907229 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.907437 kubelet[2550]: E0319 11:28:42.907280 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk"
Mar 19 11:28:42.907437 kubelet[2550]: E0319 11:28:42.907299 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk"
Mar 19 11:28:42.907592 kubelet[2550]: E0319 11:28:42.907335 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" podUID="bb9726d3-f563-49e9-bde4-e80223a64d32"
Mar 19 11:28:42.907685 containerd[1468]: time="2025-03-19T11:28:42.907651335Z" level=error msg="Failed to destroy network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.908129 containerd[1468]: time="2025-03-19T11:28:42.908093145Z" level=error msg="encountered an error cleaning up failed sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.908174 containerd[1468]: time="2025-03-19T11:28:42.908145866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.908330 kubelet[2550]: E0319 11:28:42.908306 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 19 11:28:42.908330 kubelet[2550]: E0319 11:28:42.908345 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:42.908330 kubelet[2550]: E0319 11:28:42.908371 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv"
Mar 19 11:28:42.908626 kubelet[2550]: E0319 11:28:42.908400 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a"
Mar 19 11:28:42.936296 systemd[1]: run-netns-cni\x2df4c903a4\x2d1729\x2d585f\x2d2db4\x2db05118254426.mount: Deactivated successfully.
Mar 19 11:28:42.936396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15-shm.mount: Deactivated successfully.
Mar 19 11:28:42.936449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911-shm.mount: Deactivated successfully.
Mar 19 11:28:42.936497 systemd[1]: run-netns-cni\x2d495de6a0\x2d1d8b\x2d5b57\x2da0e6\x2ddfe1c95fc0be.mount: Deactivated successfully.
Mar 19 11:28:42.936541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434-shm.mount: Deactivated successfully.
Mar 19 11:28:43.775527 kubelet[2550]: I0319 11:28:43.775403 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44"
Mar 19 11:28:43.776174 containerd[1468]: time="2025-03-19T11:28:43.776141523Z" level=info msg="StopPodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\""
Mar 19 11:28:43.776413 containerd[1468]: time="2025-03-19T11:28:43.776304606Z" level=info msg="Ensure that sandbox 0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44 in task-service has been cleanup successfully"
Mar 19 11:28:43.778123 systemd[1]: run-netns-cni\x2df263ec2c\x2d7dbd\x2d46cc\x2dea59\x2dfb8d4de8a38f.mount: Deactivated successfully.
Mar 19 11:28:43.778274 containerd[1468]: time="2025-03-19T11:28:43.778212807Z" level=info msg="TearDown network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" successfully"
Mar 19 11:28:43.778274 containerd[1468]: time="2025-03-19T11:28:43.778235047Z" level=info msg="StopPodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" returns successfully"
Mar 19 11:28:43.779684 kubelet[2550]: I0319 11:28:43.779441 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc"
Mar 19 11:28:43.779786 containerd[1468]: time="2025-03-19T11:28:43.779014584Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\""
Mar 19 11:28:43.780494 containerd[1468]: time="2025-03-19T11:28:43.780444534Z" level=info msg="TearDown network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" successfully"
Mar 19 11:28:43.780494 containerd[1468]: time="2025-03-19T11:28:43.780486575Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" returns successfully"
Mar 19 11:28:43.780987 containerd[1468]: time="2025-03-19T11:28:43.780853023Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\""
Mar 19 11:28:43.780987 containerd[1468]: time="2025-03-19T11:28:43.780934585Z" level=info msg="StopPodSandbox for \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\""
Mar 19 11:28:43.780987 containerd[1468]: time="2025-03-19T11:28:43.780949465Z" level=info msg="TearDown network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" successfully"
Mar 19 11:28:43.780987 containerd[1468]: time="2025-03-19T11:28:43.780961785Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" returns successfully"
Mar 19 11:28:43.781105 containerd[1468]: time="2025-03-19T11:28:43.781065348Z" level=info msg="Ensure that sandbox 4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc in task-service has been cleanup successfully"
Mar 19 11:28:43.781715 containerd[1468]: time="2025-03-19T11:28:43.781686281Z" level=info msg="TearDown network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" successfully"
Mar 19 11:28:43.781715 containerd[1468]: time="2025-03-19T11:28:43.781709521Z" level=info msg="StopPodSandbox for \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" returns successfully"
Mar 19 11:28:43.782154 containerd[1468]: time="2025-03-19T11:28:43.782034648Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\""
Mar 19 11:28:43.782154 containerd[1468]: time="2025-03-19T11:28:43.782120850Z" level=info msg="TearDown network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" successfully"
Mar 19 11:28:43.782154 containerd[1468]: time="2025-03-19T11:28:43.782131490Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" returns successfully"
Mar 19 11:28:43.782378 containerd[1468]: time="2025-03-19T11:28:43.782284854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:3,}"
Mar 19 11:28:43.783490 systemd[1]: run-netns-cni\x2d3b3f0572\x2d2b67\x2d2608\x2deeb8\x2de80006b5d5fe.mount: Deactivated successfully.
Mar 19 11:28:43.785239 kubelet[2550]: I0319 11:28:43.785211 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921"
Mar 19 11:28:43.785747 containerd[1468]: time="2025-03-19T11:28:43.785610684Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\""
Mar 19 11:28:43.785747 containerd[1468]: time="2025-03-19T11:28:43.785617044Z" level=info msg="StopPodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\""
Mar 19 11:28:43.785880 containerd[1468]: time="2025-03-19T11:28:43.785843409Z" level=info msg="Ensure that sandbox 5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921 in task-service has been cleanup successfully"
Mar 19 11:28:43.786096 containerd[1468]: time="2025-03-19T11:28:43.785981412Z" level=info msg="TearDown network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" successfully"
Mar 19 11:28:43.786096 containerd[1468]: time="2025-03-19T11:28:43.786008213Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" returns successfully"
Mar 19 11:28:43.786246 containerd[1468]: time="2025-03-19T11:28:43.786179336Z" level=info msg="TearDown network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" successfully"
Mar 19 11:28:43.786246 containerd[1468]: time="2025-03-19T11:28:43.786203657Z" level=info msg="StopPodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" returns successfully"
Mar 19 11:28:43.787016 containerd[1468]: time="2025-03-19T11:28:43.786815630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:3,}"
Mar 19 11:28:43.787016 containerd[1468]: time="2025-03-19T11:28:43.786842750Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\""
Mar 19 11:28:43.787016 containerd[1468]: time="2025-03-19T11:28:43.786964673Z" level=info msg="TearDown network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" successfully"
Mar 19 11:28:43.787016 containerd[1468]: time="2025-03-19T11:28:43.786975313Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" returns successfully"
Mar 19 11:28:43.787407 containerd[1468]: time="2025-03-19T11:28:43.787381202Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\""
Mar 19 11:28:43.787502 containerd[1468]: time="2025-03-19T11:28:43.787483284Z" level=info msg="TearDown network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" successfully"
Mar 19 11:28:43.787502 containerd[1468]: time="2025-03-19T11:28:43.787497284Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" returns successfully"
Mar 19 11:28:43.787890 systemd[1]: run-netns-cni\x2daaa0fac5\x2d6aa1\x2d4538\x2d53ae\x2dc6c6077ce976.mount: Deactivated successfully.
Mar 19 11:28:43.788211 containerd[1468]: time="2025-03-19T11:28:43.788187859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:3,}"
Mar 19 11:28:43.789414 kubelet[2550]: I0319 11:28:43.789121 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee"
Mar 19 11:28:43.790887 containerd[1468]: time="2025-03-19T11:28:43.790757114Z" level=info msg="StopPodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\""
Mar 19 11:28:43.790981 containerd[1468]: time="2025-03-19T11:28:43.790920077Z" level=info msg="Ensure that sandbox 6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee in task-service has been cleanup successfully"
Mar 19 11:28:43.791381 containerd[1468]: time="2025-03-19T11:28:43.791271005Z" level=info msg="TearDown network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" successfully"
Mar 19 11:28:43.791381 containerd[1468]: time="2025-03-19T11:28:43.791340326Z" level=info msg="StopPodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" returns successfully"
Mar 19 11:28:43.793463 systemd[1]: run-netns-cni\x2da9abd5d7\x2de69b\x2d69ab\x2de31a\x2d43e150b831a4.mount: Deactivated successfully.
Mar 19 11:28:43.934317 systemd[1]: run-netns-cni\x2d28baf8b6\x2d65ab\x2da5af\x2d2d4c\x2d4a90ab2684a5.mount: Deactivated successfully.
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.792182824Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\""
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.797112809Z" level=info msg="StopPodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\""
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803053975Z" level=info msg="TearDown network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" successfully"
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803223179Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" returns successfully"
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803244339Z" level=info msg="Ensure that sandbox adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5 in task-service has been cleanup successfully"
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.800338357Z" level=info msg="StopPodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\""
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803462344Z" level=info msg="TearDown network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" successfully"
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803477424Z" level=info msg="StopPodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" returns successfully"
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803563226Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\""
Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803637147Z" level=info msg="TearDown network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" successfully" Mar 
19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803647228Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" returns successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803774990Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\"" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803831312Z" level=info msg="TearDown network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.803840032Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" returns successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804037636Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\"" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804058076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:3,}" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804100437Z" level=info msg="TearDown network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804109677Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" returns successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804065997Z" level=info msg="Ensure that sandbox 00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca in task-service has been cleanup successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804434684Z" level=info msg="TearDown network for sandbox 
\"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804449485Z" level=info msg="StopPodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" returns successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804735771Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\"" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804754971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:3,}" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804849253Z" level=info msg="TearDown network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.804893054Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" returns successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.805169300Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\"" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.805242782Z" level=info msg="TearDown network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.805252422Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" returns successfully" Mar 19 11:28:43.945319 containerd[1468]: time="2025-03-19T11:28:43.805693631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:3,}" Mar 19 
11:28:43.946046 kubelet[2550]: I0319 11:28:43.796062 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5" Mar 19 11:28:43.946046 kubelet[2550]: I0319 11:28:43.799398 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca" Mar 19 11:28:43.934436 systemd[1]: run-netns-cni\x2dc4313f88\x2d2392\x2de7c6\x2dbc5a\x2d247cf60b7fca.mount: Deactivated successfully. Mar 19 11:28:44.190264 containerd[1468]: time="2025-03-19T11:28:44.190112761Z" level=error msg="Failed to destroy network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.190723 containerd[1468]: time="2025-03-19T11:28:44.190690173Z" level=error msg="encountered an error cleaning up failed sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.190858 containerd[1468]: time="2025-03-19T11:28:44.190831136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.191517 kubelet[2550]: E0319 
11:28:44.191470 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.191764 kubelet[2550]: E0319 11:28:44.191559 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:44.191764 kubelet[2550]: E0319 11:28:44.191584 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" Mar 19 11:28:44.191764 kubelet[2550]: E0319 11:28:44.191645 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9cbc458fd-8dzcg_calico-system(ae83a25a-107c-45c2-be2c-8154d796978c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" podUID="ae83a25a-107c-45c2-be2c-8154d796978c" Mar 19 11:28:44.214856 containerd[1468]: time="2025-03-19T11:28:44.214797745Z" level=error msg="Failed to destroy network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.215180 containerd[1468]: time="2025-03-19T11:28:44.215144952Z" level=error msg="encountered an error cleaning up failed sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.215228 containerd[1468]: time="2025-03-19T11:28:44.215210993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.215838 kubelet[2550]: E0319 11:28:44.215423 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.215838 kubelet[2550]: E0319 11:28:44.215479 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:44.215838 kubelet[2550]: E0319 11:28:44.215497 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" Mar 19 11:28:44.216021 kubelet[2550]: E0319 11:28:44.215538 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769667d9d6-c8hsk_calico-apiserver(bb9726d3-f563-49e9-bde4-e80223a64d32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" podUID="bb9726d3-f563-49e9-bde4-e80223a64d32" Mar 19 11:28:44.218905 containerd[1468]: 
time="2025-03-19T11:28:44.218859708Z" level=error msg="Failed to destroy network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.220161 containerd[1468]: time="2025-03-19T11:28:44.220112373Z" level=error msg="encountered an error cleaning up failed sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.220229 containerd[1468]: time="2025-03-19T11:28:44.220185695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.221851 kubelet[2550]: E0319 11:28:44.220344 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.221851 kubelet[2550]: E0319 11:28:44.220439 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:44.221851 kubelet[2550]: E0319 11:28:44.220457 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-f6wsj" Mar 19 11:28:44.221974 kubelet[2550]: E0319 11:28:44.220499 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-f6wsj_kube-system(3633e4f0-d27d-4033-847e-b1b7705fa5ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-f6wsj" podUID="3633e4f0-d27d-4033-847e-b1b7705fa5ab" Mar 19 11:28:44.225202 containerd[1468]: time="2025-03-19T11:28:44.225148516Z" level=error msg="Failed to destroy network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.226838 
containerd[1468]: time="2025-03-19T11:28:44.226755669Z" level=error msg="encountered an error cleaning up failed sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.226838 containerd[1468]: time="2025-03-19T11:28:44.226815950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.228698 containerd[1468]: time="2025-03-19T11:28:44.227502044Z" level=error msg="Failed to destroy network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.228766 kubelet[2550]: E0319 11:28:44.227874 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.228766 kubelet[2550]: E0319 11:28:44.227961 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:44.228766 kubelet[2550]: E0319 11:28:44.227977 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5sdzc" Mar 19 11:28:44.228848 kubelet[2550]: E0319 11:28:44.228018 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5sdzc_kube-system(373671c3-c6e1-4b04-b9bc-0ec6975e8f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5sdzc" podUID="373671c3-c6e1-4b04-b9bc-0ec6975e8f38" Mar 19 11:28:44.229645 containerd[1468]: time="2025-03-19T11:28:44.229059116Z" level=error msg="encountered an error cleaning up failed sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 19 11:28:44.229645 containerd[1468]: time="2025-03-19T11:28:44.229109437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.229740 kubelet[2550]: E0319 11:28:44.229496 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.229740 kubelet[2550]: E0319 11:28:44.229539 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv" Mar 19 11:28:44.229740 kubelet[2550]: E0319 11:28:44.229555 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p89jv" Mar 19 
11:28:44.229811 kubelet[2550]: E0319 11:28:44.229587 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p89jv_calico-system(8140799f-a3c9-4f76-a616-271cd3fce86a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p89jv" podUID="8140799f-a3c9-4f76-a616-271cd3fce86a" Mar 19 11:28:44.237104 containerd[1468]: time="2025-03-19T11:28:44.237055639Z" level=error msg="Failed to destroy network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.237370 containerd[1468]: time="2025-03-19T11:28:44.237326725Z" level=error msg="encountered an error cleaning up failed sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.237417 containerd[1468]: time="2025-03-19T11:28:44.237394326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.237697 kubelet[2550]: E0319 11:28:44.237555 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:28:44.237697 kubelet[2550]: E0319 11:28:44.237613 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:44.237697 kubelet[2550]: E0319 11:28:44.237628 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" Mar 19 11:28:44.237816 kubelet[2550]: E0319 11:28:44.237658 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-769667d9d6-grngk_calico-apiserver(cb1e58e2-690b-4405-933a-1a05af6347b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" podUID="cb1e58e2-690b-4405-933a-1a05af6347b1" Mar 19 11:28:44.403005 containerd[1468]: time="2025-03-19T11:28:44.402937543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:44.403423 containerd[1468]: time="2025-03-19T11:28:44.403367472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 19 11:28:44.406458 containerd[1468]: time="2025-03-19T11:28:44.406422134Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:44.408396 containerd[1468]: time="2025-03-19T11:28:44.408340373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:44.409326 containerd[1468]: time="2025-03-19T11:28:44.409285513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.692849908s" Mar 19 11:28:44.409326 containerd[1468]: 
time="2025-03-19T11:28:44.409320433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 19 11:28:44.415320 containerd[1468]: time="2025-03-19T11:28:44.415272355Z" level=info msg="CreateContainer within sandbox \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 19 11:28:44.429339 containerd[1468]: time="2025-03-19T11:28:44.429294761Z" level=info msg="CreateContainer within sandbox \"4b4a212e814bf729967f43650be3a89885cb3efb1619ce770523f1d97faa6be8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"05a6c4ae8a98fa8cc3bb5024af07e481ad2f685c2263a1aa84b38b8d7611614b\"" Mar 19 11:28:44.429921 containerd[1468]: time="2025-03-19T11:28:44.429863493Z" level=info msg="StartContainer for \"05a6c4ae8a98fa8cc3bb5024af07e481ad2f685c2263a1aa84b38b8d7611614b\"" Mar 19 11:28:44.480490 systemd[1]: Started cri-containerd-05a6c4ae8a98fa8cc3bb5024af07e481ad2f685c2263a1aa84b38b8d7611614b.scope - libcontainer container 05a6c4ae8a98fa8cc3bb5024af07e481ad2f685c2263a1aa84b38b8d7611614b. Mar 19 11:28:44.514771 containerd[1468]: time="2025-03-19T11:28:44.513102951Z" level=info msg="StartContainer for \"05a6c4ae8a98fa8cc3bb5024af07e481ad2f685c2263a1aa84b38b8d7611614b\" returns successfully" Mar 19 11:28:44.680174 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 19 11:28:44.680270 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 19 11:28:44.819409 kubelet[2550]: I0319 11:28:44.819283 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.820015652Z" level=info msg="StopPodSandbox for \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\"" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.820170415Z" level=info msg="Ensure that sandbox 1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad in task-service has been cleanup successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.822442822Z" level=info msg="TearDown network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\" successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.822462662Z" level=info msg="StopPodSandbox for \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\" returns successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.822769108Z" level=info msg="StopPodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\"" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.822841030Z" level=info msg="TearDown network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.822850110Z" level=info msg="StopPodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" returns successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.824371621Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\"" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.824438102Z" level=info msg="TearDown network for sandbox 
\"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.824447223Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" returns successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.825226079Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\"" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.825302320Z" level=info msg="TearDown network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.825312320Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" returns successfully" Mar 19 11:28:44.826553 containerd[1468]: time="2025-03-19T11:28:44.826435263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:4,}" Mar 19 11:28:44.828693 kubelet[2550]: I0319 11:28:44.828663 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367" Mar 19 11:28:44.830417 containerd[1468]: time="2025-03-19T11:28:44.830301302Z" level=info msg="StopPodSandbox for \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\"" Mar 19 11:28:44.830417 containerd[1468]: time="2025-03-19T11:28:44.830612868Z" level=info msg="Ensure that sandbox 4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367 in task-service has been cleanup successfully" Mar 19 11:28:44.832342 containerd[1468]: time="2025-03-19T11:28:44.831738211Z" level=info msg="TearDown network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\" successfully" Mar 19 11:28:44.832342 
containerd[1468]: time="2025-03-19T11:28:44.831772332Z" level=info msg="StopPodSandbox for \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\" returns successfully" Mar 19 11:28:44.832342 containerd[1468]: time="2025-03-19T11:28:44.832080338Z" level=info msg="StopPodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\"" Mar 19 11:28:44.832342 containerd[1468]: time="2025-03-19T11:28:44.832156940Z" level=info msg="TearDown network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" successfully" Mar 19 11:28:44.832342 containerd[1468]: time="2025-03-19T11:28:44.832166900Z" level=info msg="StopPodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" returns successfully" Mar 19 11:28:44.832635 containerd[1468]: time="2025-03-19T11:28:44.832576829Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\"" Mar 19 11:28:44.833208 kubelet[2550]: I0319 11:28:44.832608 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27" Mar 19 11:28:44.833277 containerd[1468]: time="2025-03-19T11:28:44.832679031Z" level=info msg="TearDown network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" successfully" Mar 19 11:28:44.833277 containerd[1468]: time="2025-03-19T11:28:44.832726552Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" returns successfully" Mar 19 11:28:44.833277 containerd[1468]: time="2025-03-19T11:28:44.833058438Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\"" Mar 19 11:28:44.833277 containerd[1468]: time="2025-03-19T11:28:44.833120560Z" level=info msg="TearDown network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" successfully" Mar 19 
11:28:44.833277 containerd[1468]: time="2025-03-19T11:28:44.833129440Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" returns successfully" Mar 19 11:28:44.833277 containerd[1468]: time="2025-03-19T11:28:44.833173081Z" level=info msg="StopPodSandbox for \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\"" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.833402485Z" level=info msg="Ensure that sandbox 142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27 in task-service has been cleanup successfully" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.833676011Z" level=info msg="TearDown network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\" successfully" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.833690891Z" level=info msg="StopPodSandbox for \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\" returns successfully" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.833752892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:4,}" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.833988297Z" level=info msg="StopPodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\"" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.834069339Z" level=info msg="TearDown network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" successfully" Mar 19 11:28:44.834387 containerd[1468]: time="2025-03-19T11:28:44.834078379Z" level=info msg="StopPodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" returns successfully" Mar 19 11:28:44.835387 containerd[1468]: time="2025-03-19T11:28:44.835142721Z" level=info msg="StopPodSandbox for 
\"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\"" Mar 19 11:28:44.835387 containerd[1468]: time="2025-03-19T11:28:44.835214762Z" level=info msg="TearDown network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" successfully" Mar 19 11:28:44.835387 containerd[1468]: time="2025-03-19T11:28:44.835224123Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" returns successfully" Mar 19 11:28:44.835977 containerd[1468]: time="2025-03-19T11:28:44.835823415Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\"" Mar 19 11:28:44.835977 containerd[1468]: time="2025-03-19T11:28:44.835897856Z" level=info msg="TearDown network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" successfully" Mar 19 11:28:44.835977 containerd[1468]: time="2025-03-19T11:28:44.835913137Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" returns successfully" Mar 19 11:28:44.836455 containerd[1468]: time="2025-03-19T11:28:44.836434787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:4,}" Mar 19 11:28:44.836884 kubelet[2550]: I0319 11:28:44.836861 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69" Mar 19 11:28:44.837325 containerd[1468]: time="2025-03-19T11:28:44.837291845Z" level=info msg="StopPodSandbox for \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\"" Mar 19 11:28:44.837927 containerd[1468]: time="2025-03-19T11:28:44.837490529Z" level=info msg="Ensure that sandbox 80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69 in task-service has been cleanup successfully" Mar 19 11:28:44.837927 containerd[1468]: 
time="2025-03-19T11:28:44.837705853Z" level=info msg="TearDown network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\" successfully" Mar 19 11:28:44.837927 containerd[1468]: time="2025-03-19T11:28:44.837719573Z" level=info msg="StopPodSandbox for \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\" returns successfully" Mar 19 11:28:44.838242 containerd[1468]: time="2025-03-19T11:28:44.838220344Z" level=info msg="StopPodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\"" Mar 19 11:28:44.838338 containerd[1468]: time="2025-03-19T11:28:44.838321666Z" level=info msg="TearDown network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" successfully" Mar 19 11:28:44.838375 containerd[1468]: time="2025-03-19T11:28:44.838337146Z" level=info msg="StopPodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" returns successfully" Mar 19 11:28:44.838840 containerd[1468]: time="2025-03-19T11:28:44.838806636Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\"" Mar 19 11:28:44.838921 containerd[1468]: time="2025-03-19T11:28:44.838875557Z" level=info msg="TearDown network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" successfully" Mar 19 11:28:44.838948 containerd[1468]: time="2025-03-19T11:28:44.838919438Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" returns successfully" Mar 19 11:28:44.839282 containerd[1468]: time="2025-03-19T11:28:44.839256365Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\"" Mar 19 11:28:44.839349 containerd[1468]: time="2025-03-19T11:28:44.839335926Z" level=info msg="TearDown network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" successfully" Mar 19 11:28:44.839656 containerd[1468]: 
time="2025-03-19T11:28:44.839349087Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" returns successfully" Mar 19 11:28:44.840219 containerd[1468]: time="2025-03-19T11:28:44.840177664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:4,}" Mar 19 11:28:44.841238 kubelet[2550]: I0319 11:28:44.841194 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3" Mar 19 11:28:44.841805 containerd[1468]: time="2025-03-19T11:28:44.841776896Z" level=info msg="StopPodSandbox for \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\"" Mar 19 11:28:44.841934 containerd[1468]: time="2025-03-19T11:28:44.841916019Z" level=info msg="Ensure that sandbox 68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3 in task-service has been cleanup successfully" Mar 19 11:28:44.842086 containerd[1468]: time="2025-03-19T11:28:44.842069542Z" level=info msg="TearDown network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\" successfully" Mar 19 11:28:44.842109 containerd[1468]: time="2025-03-19T11:28:44.842085182Z" level=info msg="StopPodSandbox for \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\" returns successfully" Mar 19 11:28:44.842760 containerd[1468]: time="2025-03-19T11:28:44.842732276Z" level=info msg="StopPodSandbox for \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\"" Mar 19 11:28:44.842841 containerd[1468]: time="2025-03-19T11:28:44.842825718Z" level=info msg="TearDown network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" successfully" Mar 19 11:28:44.842867 containerd[1468]: time="2025-03-19T11:28:44.842839918Z" level=info msg="StopPodSandbox for 
\"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" returns successfully" Mar 19 11:28:44.844532 containerd[1468]: time="2025-03-19T11:28:44.844454591Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\"" Mar 19 11:28:44.846028 containerd[1468]: time="2025-03-19T11:28:44.845663495Z" level=info msg="TearDown network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" successfully" Mar 19 11:28:44.846249 containerd[1468]: time="2025-03-19T11:28:44.845790978Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" returns successfully" Mar 19 11:28:44.846284 kubelet[2550]: I0319 11:28:44.846189 2550 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e" Mar 19 11:28:44.849146 containerd[1468]: time="2025-03-19T11:28:44.849073045Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\"" Mar 19 11:28:44.849244 containerd[1468]: time="2025-03-19T11:28:44.849199128Z" level=info msg="TearDown network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" successfully" Mar 19 11:28:44.849244 containerd[1468]: time="2025-03-19T11:28:44.849211688Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" returns successfully" Mar 19 11:28:44.849484 containerd[1468]: time="2025-03-19T11:28:44.849460573Z" level=info msg="StopPodSandbox for \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\"" Mar 19 11:28:44.850430 containerd[1468]: time="2025-03-19T11:28:44.850400392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:4,}" Mar 19 11:28:44.851351 containerd[1468]: time="2025-03-19T11:28:44.850760479Z" 
level=info msg="Ensure that sandbox 172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e in task-service has been cleanup successfully" Mar 19 11:28:44.853074 containerd[1468]: time="2025-03-19T11:28:44.853038486Z" level=info msg="TearDown network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\" successfully" Mar 19 11:28:44.853074 containerd[1468]: time="2025-03-19T11:28:44.853064806Z" level=info msg="StopPodSandbox for \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\" returns successfully" Mar 19 11:28:44.862088 containerd[1468]: time="2025-03-19T11:28:44.862045550Z" level=info msg="StopPodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\"" Mar 19 11:28:44.862184 containerd[1468]: time="2025-03-19T11:28:44.862143152Z" level=info msg="TearDown network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" successfully" Mar 19 11:28:44.862184 containerd[1468]: time="2025-03-19T11:28:44.862154112Z" level=info msg="StopPodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" returns successfully" Mar 19 11:28:44.862962 containerd[1468]: time="2025-03-19T11:28:44.862931168Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\"" Mar 19 11:28:44.863026 containerd[1468]: time="2025-03-19T11:28:44.863014609Z" level=info msg="TearDown network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" successfully" Mar 19 11:28:44.863051 containerd[1468]: time="2025-03-19T11:28:44.863024450Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" returns successfully" Mar 19 11:28:44.864270 containerd[1468]: time="2025-03-19T11:28:44.863987909Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\"" Mar 19 11:28:44.864270 containerd[1468]: 
time="2025-03-19T11:28:44.864065951Z" level=info msg="TearDown network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" successfully" Mar 19 11:28:44.864270 containerd[1468]: time="2025-03-19T11:28:44.864074991Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" returns successfully" Mar 19 11:28:44.864746 containerd[1468]: time="2025-03-19T11:28:44.864720924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:4,}" Mar 19 11:28:44.878419 kubelet[2550]: I0319 11:28:44.878182 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t8k9k" podStartSLOduration=1.187758356 podStartE2EDuration="12.878167679s" podCreationTimestamp="2025-03-19 11:28:32 +0000 UTC" firstStartedPulling="2025-03-19 11:28:32.719704487 +0000 UTC m=+13.180648386" lastFinishedPulling="2025-03-19 11:28:44.41011381 +0000 UTC m=+24.871057709" observedRunningTime="2025-03-19 11:28:44.87773783 +0000 UTC m=+25.338681769" watchObservedRunningTime="2025-03-19 11:28:44.878167679 +0000 UTC m=+25.339111538" Mar 19 11:28:44.946488 systemd[1]: run-netns-cni\x2d8dd0f132\x2d3118\x2d3266\x2d42ee\x2dc3192fef425e.mount: Deactivated successfully. Mar 19 11:28:44.946581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad-shm.mount: Deactivated successfully. Mar 19 11:28:44.946634 systemd[1]: run-netns-cni\x2dca9c94bd\x2dd7be\x2dd4f2\x2dc07c\x2df61c7707ce8a.mount: Deactivated successfully. Mar 19 11:28:44.946677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69-shm.mount: Deactivated successfully. Mar 19 11:28:44.946727 systemd[1]: run-netns-cni\x2d818b9318\x2dbc7d\x2d7ee5\x2d316a\x2df65200b8acfc.mount: Deactivated successfully. 
Mar 19 11:28:44.946771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3-shm.mount: Deactivated successfully. Mar 19 11:28:44.946818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374551378.mount: Deactivated successfully. Mar 19 11:28:45.341987 systemd-networkd[1401]: calia7682755700: Link UP Mar 19 11:28:45.342140 systemd-networkd[1401]: calia7682755700: Gained carrier Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:44.966 [INFO][4309] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.022 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0 coredns-6f6b679f8f- kube-system 3633e4f0-d27d-4033-847e-b1b7705fa5ab 733 0 2025-03-19 11:28:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-f6wsj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia7682755700 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.022 [INFO][4309] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.213 [INFO][4343] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" HandleID="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Workload="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.233 [INFO][4343] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" HandleID="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Workload="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000351280), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-f6wsj", "timestamp":"2025-03-19 11:28:45.213930958 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.233 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.233 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.234 [INFO][4343] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.241 [INFO][4343] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.314 [INFO][4343] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.318 [INFO][4343] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.320 [INFO][4343] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.321 [INFO][4343] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.321 [INFO][4343] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.323 [INFO][4343] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7 Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.326 [INFO][4343] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.331 [INFO][4343] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.331 [INFO][4343] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" host="localhost" Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.331 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:28:45.351567 containerd[1468]: 2025-03-19 11:28:45.331 [INFO][4343] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" HandleID="k8s-pod-network.3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Workload="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 11:28:45.353460 containerd[1468]: 2025-03-19 11:28:45.334 [INFO][4309] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3633e4f0-d27d-4033-847e-b1b7705fa5ab", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-f6wsj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7682755700", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.353460 containerd[1468]: 2025-03-19 11:28:45.334 [INFO][4309] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 11:28:45.353460 containerd[1468]: 2025-03-19 11:28:45.334 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7682755700 ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 11:28:45.353460 containerd[1468]: 2025-03-19 11:28:45.342 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 
11:28:45.353460 containerd[1468]: 2025-03-19 11:28:45.342 [INFO][4309] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3633e4f0-d27d-4033-847e-b1b7705fa5ab", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7", Pod:"coredns-6f6b679f8f-f6wsj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7682755700", MAC:"ba:29:9d:03:94:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.353460 containerd[1468]: 2025-03-19 11:28:45.349 [INFO][4309] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-f6wsj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--f6wsj-eth0" Mar 19 11:28:45.376478 containerd[1468]: time="2025-03-19T11:28:45.376043495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:45.376478 containerd[1468]: time="2025-03-19T11:28:45.376439503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:45.376478 containerd[1468]: time="2025-03-19T11:28:45.376452063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.376698 containerd[1468]: time="2025-03-19T11:28:45.376530785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.397526 systemd[1]: Started cri-containerd-3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7.scope - libcontainer container 3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7. 
Mar 19 11:28:45.408184 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:28:45.426138 containerd[1468]: time="2025-03-19T11:28:45.426082956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f6wsj,Uid:3633e4f0-d27d-4033-847e-b1b7705fa5ab,Namespace:kube-system,Attempt:4,} returns sandbox id \"3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7\"" Mar 19 11:28:45.429333 containerd[1468]: time="2025-03-19T11:28:45.429294659Z" level=info msg="CreateContainer within sandbox \"3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:28:45.444845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285033689.mount: Deactivated successfully. Mar 19 11:28:45.447926 systemd-networkd[1401]: cali2425360d83b: Link UP Mar 19 11:28:45.448148 systemd-networkd[1401]: cali2425360d83b: Gained carrier Mar 19 11:28:45.450786 containerd[1468]: time="2025-03-19T11:28:45.450748519Z" level=info msg="CreateContainer within sandbox \"3e832b5d9392bd2b22d20a5cf11fee11847a1b0bfb0badce9d40b793595d46f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"194605456781c7d8f41286388467dce72f7fd6c4b0527e5d5e11c41a88d94449\"" Mar 19 11:28:45.453379 containerd[1468]: time="2025-03-19T11:28:45.453309209Z" level=info msg="StartContainer for \"194605456781c7d8f41286388467dce72f7fd6c4b0527e5d5e11c41a88d94449\"" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:44.931 [INFO][4292] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.025 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0 calico-apiserver-769667d9d6- calico-apiserver bb9726d3-f563-49e9-bde4-e80223a64d32 738 0 
2025-03-19 11:28:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769667d9d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-769667d9d6-c8hsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2425360d83b [] []}} ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.025 [INFO][4292] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.213 [INFO][4348] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" HandleID="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Workload="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.237 [INFO][4348] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" HandleID="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Workload="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005040d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-769667d9d6-c8hsk", "timestamp":"2025-03-19 
11:28:45.213925558 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.237 [INFO][4348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.331 [INFO][4348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.331 [INFO][4348] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.337 [INFO][4348] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.341 [INFO][4348] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.424 [INFO][4348] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.427 [INFO][4348] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.431 [INFO][4348] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.431 [INFO][4348] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.433 [INFO][4348] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3 Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.436 [INFO][4348] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.441 [INFO][4348] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.441 [INFO][4348] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" host="localhost" Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.441 [INFO][4348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 11:28:45.466568 containerd[1468]: 2025-03-19 11:28:45.441 [INFO][4348] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" HandleID="k8s-pod-network.a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Workload="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.467082 containerd[1468]: 2025-03-19 11:28:45.445 [INFO][4292] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0", GenerateName:"calico-apiserver-769667d9d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb9726d3-f563-49e9-bde4-e80223a64d32", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769667d9d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-769667d9d6-c8hsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2425360d83b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.467082 containerd[1468]: 2025-03-19 11:28:45.445 [INFO][4292] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.467082 containerd[1468]: 2025-03-19 11:28:45.445 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2425360d83b ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.467082 containerd[1468]: 2025-03-19 11:28:45.448 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.467082 containerd[1468]: 2025-03-19 11:28:45.449 [INFO][4292] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0", GenerateName:"calico-apiserver-769667d9d6-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"bb9726d3-f563-49e9-bde4-e80223a64d32", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769667d9d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3", Pod:"calico-apiserver-769667d9d6-c8hsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2425360d83b", MAC:"e6:89:dd:63:36:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.467082 containerd[1468]: 2025-03-19 11:28:45.463 [INFO][4292] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-c8hsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--c8hsk-eth0" Mar 19 11:28:45.481538 systemd[1]: Started cri-containerd-194605456781c7d8f41286388467dce72f7fd6c4b0527e5d5e11c41a88d94449.scope - libcontainer container 194605456781c7d8f41286388467dce72f7fd6c4b0527e5d5e11c41a88d94449. Mar 19 11:28:45.487966 containerd[1468]: time="2025-03-19T11:28:45.487857967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:45.487966 containerd[1468]: time="2025-03-19T11:28:45.487931088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:45.487966 containerd[1468]: time="2025-03-19T11:28:45.487942568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.488247 containerd[1468]: time="2025-03-19T11:28:45.488201013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.503504 systemd[1]: Started cri-containerd-a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3.scope - libcontainer container a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3. Mar 19 11:28:45.513923 containerd[1468]: time="2025-03-19T11:28:45.513848436Z" level=info msg="StartContainer for \"194605456781c7d8f41286388467dce72f7fd6c4b0527e5d5e11c41a88d94449\" returns successfully" Mar 19 11:28:45.517741 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:28:45.551095 systemd-networkd[1401]: calib441cab6279: Link UP Mar 19 11:28:45.552895 systemd-networkd[1401]: calib441cab6279: Gained carrier Mar 19 11:28:45.555292 containerd[1468]: time="2025-03-19T11:28:45.555052684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-c8hsk,Uid:bb9726d3-f563-49e9-bde4-e80223a64d32,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3\"" Mar 19 11:28:45.560543 containerd[1468]: time="2025-03-19T11:28:45.559858738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:44.919 [INFO][4282] cni-plugin/utils.go 100: 
File /var/lib/calico/mtu does not exist Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.019 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0 coredns-6f6b679f8f- kube-system 373671c3-c6e1-4b04-b9bc-0ec6975e8f38 734 0 2025-03-19 11:28:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-5sdzc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib441cab6279 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.019 [INFO][4282] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.217 [INFO][4357] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" HandleID="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Workload="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.237 [INFO][4357] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" HandleID="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Workload="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031d2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-5sdzc", "timestamp":"2025-03-19 11:28:45.21761763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.237 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.441 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.441 [INFO][4357] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.445 [INFO][4357] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.452 [INFO][4357] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.520 [INFO][4357] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.522 [INFO][4357] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.526 [INFO][4357] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.526 [INFO][4357] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.528 [INFO][4357] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.533 [INFO][4357] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.539 [INFO][4357] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.539 [INFO][4357] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" host="localhost" Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.539 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 11:28:45.568390 containerd[1468]: 2025-03-19 11:28:45.539 [INFO][4357] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" HandleID="k8s-pod-network.3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Workload="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.568859 containerd[1468]: 2025-03-19 11:28:45.546 [INFO][4282] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"373671c3-c6e1-4b04-b9bc-0ec6975e8f38", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-5sdzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib441cab6279", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.568859 containerd[1468]: 2025-03-19 11:28:45.546 [INFO][4282] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.568859 containerd[1468]: 2025-03-19 11:28:45.546 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib441cab6279 ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.568859 containerd[1468]: 2025-03-19 11:28:45.553 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.568859 containerd[1468]: 2025-03-19 11:28:45.554 [INFO][4282] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"373671c3-c6e1-4b04-b9bc-0ec6975e8f38", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db", Pod:"coredns-6f6b679f8f-5sdzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib441cab6279", MAC:"72:ce:1f:c4:b1:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.568859 containerd[1468]: 2025-03-19 11:28:45.566 [INFO][4282] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-5sdzc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--5sdzc-eth0" Mar 19 11:28:45.587824 containerd[1468]: time="2025-03-19T11:28:45.586881787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:45.587824 containerd[1468]: time="2025-03-19T11:28:45.587007510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:45.587824 containerd[1468]: time="2025-03-19T11:28:45.587025390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.587824 containerd[1468]: time="2025-03-19T11:28:45.587702883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.611557 systemd[1]: Started cri-containerd-3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db.scope - libcontainer container 3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db. 
Mar 19 11:28:45.625299 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:28:45.652033 containerd[1468]: time="2025-03-19T11:28:45.651819540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5sdzc,Uid:373671c3-c6e1-4b04-b9bc-0ec6975e8f38,Namespace:kube-system,Attempt:4,} returns sandbox id \"3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db\"" Mar 19 11:28:45.656201 containerd[1468]: time="2025-03-19T11:28:45.655581574Z" level=info msg="CreateContainer within sandbox \"3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:28:45.662692 systemd-networkd[1401]: calidfc17317186: Link UP Mar 19 11:28:45.665885 systemd-networkd[1401]: calidfc17317186: Gained carrier Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:44.886 [INFO][4258] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.017 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0 calico-kube-controllers-9cbc458fd- calico-system ae83a25a-107c-45c2-be2c-8154d796978c 736 0 2025-03-19 11:28:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9cbc458fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9cbc458fd-8dzcg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidfc17317186 [] []}} ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.018 [INFO][4258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.216 [INFO][4345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" HandleID="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Workload="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.238 [INFO][4345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" HandleID="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Workload="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004266c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9cbc458fd-8dzcg", "timestamp":"2025-03-19 11:28:45.216499568 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.238 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.539 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.539 [INFO][4345] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.547 [INFO][4345] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.552 [INFO][4345] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.624 [INFO][4345] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.628 [INFO][4345] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.635 [INFO][4345] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.635 [INFO][4345] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.638 [INFO][4345] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.643 [INFO][4345] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.650 [INFO][4345] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.650 [INFO][4345] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" host="localhost" Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.650 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:28:45.680642 containerd[1468]: 2025-03-19 11:28:45.650 [INFO][4345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" HandleID="k8s-pod-network.87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Workload="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.681181 containerd[1468]: 2025-03-19 11:28:45.655 [INFO][4258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0", GenerateName:"calico-kube-controllers-9cbc458fd-", Namespace:"calico-system", SelfLink:"", UID:"ae83a25a-107c-45c2-be2c-8154d796978c", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cbc458fd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9cbc458fd-8dzcg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfc17317186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.681181 containerd[1468]: 2025-03-19 11:28:45.655 [INFO][4258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.681181 containerd[1468]: 2025-03-19 11:28:45.655 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfc17317186 ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.681181 containerd[1468]: 2025-03-19 11:28:45.665 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.681181 containerd[1468]: 2025-03-19 11:28:45.666 [INFO][4258] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0", GenerateName:"calico-kube-controllers-9cbc458fd-", Namespace:"calico-system", SelfLink:"", UID:"ae83a25a-107c-45c2-be2c-8154d796978c", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cbc458fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d", Pod:"calico-kube-controllers-9cbc458fd-8dzcg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfc17317186", MAC:"c2:c5:9a:75:aa:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.681181 containerd[1468]: 2025-03-19 11:28:45.675 [INFO][4258] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d" Namespace="calico-system" Pod="calico-kube-controllers-9cbc458fd-8dzcg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cbc458fd--8dzcg-eth0" Mar 19 11:28:45.681181 containerd[1468]: time="2025-03-19T11:28:45.681048353Z" level=info msg="CreateContainer within sandbox \"3475130840293c0e4f96508fcdcd214531aabe28418c482077c7efcd112d52db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ddf13d8f184c253006462ae174a487c2e29c6686d69ad4bb30a1eea1c1efa5a\"" Mar 19 11:28:45.681683 containerd[1468]: time="2025-03-19T11:28:45.681657045Z" level=info msg="StartContainer for \"4ddf13d8f184c253006462ae174a487c2e29c6686d69ad4bb30a1eea1c1efa5a\"" Mar 19 11:28:45.711525 systemd[1]: Started cri-containerd-4ddf13d8f184c253006462ae174a487c2e29c6686d69ad4bb30a1eea1c1efa5a.scope - libcontainer container 4ddf13d8f184c253006462ae174a487c2e29c6686d69ad4bb30a1eea1c1efa5a. Mar 19 11:28:45.712118 containerd[1468]: time="2025-03-19T11:28:45.711507710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:45.712118 containerd[1468]: time="2025-03-19T11:28:45.711563431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:45.712118 containerd[1468]: time="2025-03-19T11:28:45.711578991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.712118 containerd[1468]: time="2025-03-19T11:28:45.711650713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.730587 systemd[1]: Started cri-containerd-87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d.scope - libcontainer container 87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d. Mar 19 11:28:45.744758 systemd-networkd[1401]: calib00ee305eb4: Link UP Mar 19 11:28:45.745452 systemd-networkd[1401]: calib00ee305eb4: Gained carrier Mar 19 11:28:45.748680 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:28:45.762803 containerd[1468]: time="2025-03-19T11:28:45.761926578Z" level=info msg="StartContainer for \"4ddf13d8f184c253006462ae174a487c2e29c6686d69ad4bb30a1eea1c1efa5a\" returns successfully" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.011 [INFO][4318] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.038 [INFO][4318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--p89jv-eth0 csi-node-driver- calico-system 8140799f-a3c9-4f76-a616-271cd3fce86a 617 0 2025-03-19 11:28:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-p89jv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib00ee305eb4 [] []}} ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.039 [INFO][4318] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.218 [INFO][4375] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" HandleID="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Workload="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.242 [INFO][4375] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" HandleID="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Workload="localhost-k8s-csi--node--driver--p89jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000297d00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-p89jv", "timestamp":"2025-03-19 11:28:45.218376845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.242 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.650 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.651 [INFO][4375] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.654 [INFO][4375] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.672 [INFO][4375] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.723 [INFO][4375] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.725 [INFO][4375] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.727 [INFO][4375] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.727 [INFO][4375] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.728 [INFO][4375] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.732 [INFO][4375] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.738 [INFO][4375] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.738 [INFO][4375] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" host="localhost" Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.739 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:28:45.763408 containerd[1468]: 2025-03-19 11:28:45.739 [INFO][4375] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" HandleID="k8s-pod-network.993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Workload="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.763868 containerd[1468]: 2025-03-19 11:28:45.742 [INFO][4318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p89jv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8140799f-a3c9-4f76-a616-271cd3fce86a", ResourceVersion:"617", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-p89jv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib00ee305eb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.763868 containerd[1468]: 2025-03-19 11:28:45.742 [INFO][4318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.763868 containerd[1468]: 2025-03-19 11:28:45.742 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib00ee305eb4 ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.763868 containerd[1468]: 2025-03-19 11:28:45.745 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.763868 containerd[1468]: 2025-03-19 11:28:45.747 [INFO][4318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" 
Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p89jv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8140799f-a3c9-4f76-a616-271cd3fce86a", ResourceVersion:"617", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d", Pod:"csi-node-driver-p89jv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib00ee305eb4", MAC:"de:22:e4:72:5f:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.763868 containerd[1468]: 2025-03-19 11:28:45.760 [INFO][4318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d" Namespace="calico-system" Pod="csi-node-driver-p89jv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p89jv-eth0" Mar 19 11:28:45.787334 containerd[1468]: 
time="2025-03-19T11:28:45.786971189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:45.787334 containerd[1468]: time="2025-03-19T11:28:45.787042390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:45.787334 containerd[1468]: time="2025-03-19T11:28:45.787057990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.787881 containerd[1468]: time="2025-03-19T11:28:45.787451758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.820677 systemd[1]: Started cri-containerd-993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d.scope - libcontainer container 993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d. 
Mar 19 11:28:45.822032 containerd[1468]: time="2025-03-19T11:28:45.821996355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cbc458fd-8dzcg,Uid:ae83a25a-107c-45c2-be2c-8154d796978c,Namespace:calico-system,Attempt:4,} returns sandbox id \"87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d\"" Mar 19 11:28:45.843042 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:28:45.849795 systemd-networkd[1401]: calib90a249ca4d: Link UP Mar 19 11:28:45.849997 systemd-networkd[1401]: calib90a249ca4d: Gained carrier Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:44.913 [INFO][4271] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.022 [INFO][4271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0 calico-apiserver-769667d9d6- calico-apiserver cb1e58e2-690b-4405-933a-1a05af6347b1 735 0 2025-03-19 11:28:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769667d9d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-769667d9d6-grngk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib90a249ca4d [] []}} ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.022 [INFO][4271] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" 
Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.236 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" HandleID="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Workload="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.248 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" HandleID="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Workload="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-769667d9d6-grngk", "timestamp":"2025-03-19 11:28:45.236647203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.248 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.738 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.739 [INFO][4355] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.756 [INFO][4355] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.773 [INFO][4355] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.827 [INFO][4355] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.829 [INFO][4355] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.831 [INFO][4355] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.831 [INFO][4355] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.833 [INFO][4355] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7 Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.836 [INFO][4355] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.841 [INFO][4355] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.842 [INFO][4355] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" host="localhost" Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.842 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 19 11:28:45.862580 containerd[1468]: 2025-03-19 11:28:45.842 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" HandleID="k8s-pod-network.5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Workload="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.863253 containerd[1468]: 2025-03-19 11:28:45.846 [INFO][4271] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0", GenerateName:"calico-apiserver-769667d9d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb1e58e2-690b-4405-933a-1a05af6347b1", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769667d9d6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-769667d9d6-grngk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib90a249ca4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.863253 containerd[1468]: 2025-03-19 11:28:45.846 [INFO][4271] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.863253 containerd[1468]: 2025-03-19 11:28:45.846 [INFO][4271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib90a249ca4d ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.863253 containerd[1468]: 2025-03-19 11:28:45.850 [INFO][4271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.863253 containerd[1468]: 2025-03-19 11:28:45.850 [INFO][4271] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0", GenerateName:"calico-apiserver-769667d9d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb1e58e2-690b-4405-933a-1a05af6347b1", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769667d9d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7", Pod:"calico-apiserver-769667d9d6-grngk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib90a249ca4d", MAC:"5e:7e:fe:99:fe:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:28:45.863253 containerd[1468]: 2025-03-19 11:28:45.860 [INFO][4271] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7" Namespace="calico-apiserver" Pod="calico-apiserver-769667d9d6-grngk" WorkloadEndpoint="localhost-k8s-calico--apiserver--769667d9d6--grngk-eth0" Mar 19 11:28:45.865344 containerd[1468]: time="2025-03-19T11:28:45.865314644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p89jv,Uid:8140799f-a3c9-4f76-a616-271cd3fce86a,Namespace:calico-system,Attempt:4,} returns sandbox id \"993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d\"" Mar 19 11:28:45.878516 kubelet[2550]: I0319 11:28:45.878159 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:28:45.887868 kubelet[2550]: I0319 11:28:45.887805 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-f6wsj" podStartSLOduration=20.887788805 podStartE2EDuration="20.887788805s" podCreationTimestamp="2025-03-19 11:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:28:45.883925449 +0000 UTC m=+26.344869388" watchObservedRunningTime="2025-03-19 11:28:45.887788805 +0000 UTC m=+26.348732664" Mar 19 11:28:45.889863 containerd[1468]: time="2025-03-19T11:28:45.889533919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:28:45.889863 containerd[1468]: time="2025-03-19T11:28:45.889590840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:28:45.889863 containerd[1468]: time="2025-03-19T11:28:45.889606160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.889863 containerd[1468]: time="2025-03-19T11:28:45.889683322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:28:45.921538 systemd[1]: Started cri-containerd-5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7.scope - libcontainer container 5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7. Mar 19 11:28:45.956053 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:28:45.980443 containerd[1468]: time="2025-03-19T11:28:45.980404860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769667d9d6-grngk,Uid:cb1e58e2-690b-4405-933a-1a05af6347b1,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7\"" Mar 19 11:28:46.323396 kernel: bpftool[4970]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 19 11:28:46.482326 systemd-networkd[1401]: vxlan.calico: Link UP Mar 19 11:28:46.482332 systemd-networkd[1401]: vxlan.calico: Gained carrier Mar 19 11:28:46.799479 systemd-networkd[1401]: calib441cab6279: Gained IPv6LL Mar 19 11:28:46.929224 systemd-networkd[1401]: calidfc17317186: Gained IPv6LL Mar 19 11:28:46.992079 systemd-networkd[1401]: cali2425360d83b: Gained IPv6LL Mar 19 11:28:46.992860 systemd-networkd[1401]: calib90a249ca4d: Gained IPv6LL Mar 19 11:28:47.189059 containerd[1468]: time="2025-03-19T11:28:47.188269507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:47.189059 containerd[1468]: time="2025-03-19T11:28:47.188762196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 19 11:28:47.189730 containerd[1468]: 
time="2025-03-19T11:28:47.189700493Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:47.193739 containerd[1468]: time="2025-03-19T11:28:47.193663725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:47.193827 containerd[1468]: time="2025-03-19T11:28:47.193768287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.633878869s" Mar 19 11:28:47.193827 containerd[1468]: time="2025-03-19T11:28:47.193804168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 19 11:28:47.195631 containerd[1468]: time="2025-03-19T11:28:47.195603280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 19 11:28:47.196722 containerd[1468]: time="2025-03-19T11:28:47.196692900Z" level=info msg="CreateContainer within sandbox \"a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 19 11:28:47.208881 containerd[1468]: time="2025-03-19T11:28:47.208835840Z" level=info msg="CreateContainer within sandbox \"a5e1b95840f47daa0b42b78713b66e5ce0d3663745e2b1ab447df258bdb68cd3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64d5397f147678c6a58cd025f137e82c209605219851ebaa41524ea60e8aff73\"" Mar 19 11:28:47.209434 
containerd[1468]: time="2025-03-19T11:28:47.209405971Z" level=info msg="StartContainer for \"64d5397f147678c6a58cd025f137e82c209605219851ebaa41524ea60e8aff73\"" Mar 19 11:28:47.247582 systemd-networkd[1401]: calia7682755700: Gained IPv6LL Mar 19 11:28:47.250584 systemd[1]: Started cri-containerd-64d5397f147678c6a58cd025f137e82c209605219851ebaa41524ea60e8aff73.scope - libcontainer container 64d5397f147678c6a58cd025f137e82c209605219851ebaa41524ea60e8aff73. Mar 19 11:28:47.279243 containerd[1468]: time="2025-03-19T11:28:47.279204597Z" level=info msg="StartContainer for \"64d5397f147678c6a58cd025f137e82c209605219851ebaa41524ea60e8aff73\" returns successfully" Mar 19 11:28:47.696472 systemd-networkd[1401]: calib00ee305eb4: Gained IPv6LL Mar 19 11:28:47.916982 kubelet[2550]: I0319 11:28:47.915951 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-769667d9d6-c8hsk" podStartSLOduration=14.279702395 podStartE2EDuration="15.915924387s" podCreationTimestamp="2025-03-19 11:28:32 +0000 UTC" firstStartedPulling="2025-03-19 11:28:45.559150044 +0000 UTC m=+26.020093943" lastFinishedPulling="2025-03-19 11:28:47.195371956 +0000 UTC m=+27.656315935" observedRunningTime="2025-03-19 11:28:47.915432338 +0000 UTC m=+28.376376237" watchObservedRunningTime="2025-03-19 11:28:47.915924387 +0000 UTC m=+28.376868286" Mar 19 11:28:47.918851 kubelet[2550]: I0319 11:28:47.917723 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5sdzc" podStartSLOduration=22.91771246 podStartE2EDuration="22.91771246s" podCreationTimestamp="2025-03-19 11:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:28:45.910434889 +0000 UTC m=+26.371378828" watchObservedRunningTime="2025-03-19 11:28:47.91771246 +0000 UTC m=+28.378656359" Mar 19 11:28:48.335550 systemd-networkd[1401]: vxlan.calico: Gained 
IPv6LL Mar 19 11:28:48.636664 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:39742.service - OpenSSH per-connection server daemon (10.0.0.1:39742). Mar 19 11:28:48.706511 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 39742 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:28:48.708409 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:28:48.714778 systemd-logind[1451]: New session 8 of user core. Mar 19 11:28:48.725496 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:28:48.800284 containerd[1468]: time="2025-03-19T11:28:48.800228100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:48.800984 containerd[1468]: time="2025-03-19T11:28:48.800895472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 19 11:28:48.801518 containerd[1468]: time="2025-03-19T11:28:48.801491802Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:48.804424 containerd[1468]: time="2025-03-19T11:28:48.804385573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:48.805619 containerd[1468]: time="2025-03-19T11:28:48.805236468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", 
size \"33929982\" in 1.609604867s" Mar 19 11:28:48.805619 containerd[1468]: time="2025-03-19T11:28:48.805269188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 19 11:28:48.807354 containerd[1468]: time="2025-03-19T11:28:48.807321624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 19 11:28:48.818247 containerd[1468]: time="2025-03-19T11:28:48.817228877Z" level=info msg="CreateContainer within sandbox \"87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 19 11:28:48.828520 containerd[1468]: time="2025-03-19T11:28:48.828478474Z" level=info msg="CreateContainer within sandbox \"87ac7756c0f364dc1322b820f975f981811e51be6a2bae0728bf52649252fe1d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87\"" Mar 19 11:28:48.829924 containerd[1468]: time="2025-03-19T11:28:48.829886019Z" level=info msg="StartContainer for \"b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87\"" Mar 19 11:28:48.879525 systemd[1]: Started cri-containerd-b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87.scope - libcontainer container b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87. 
Mar 19 11:28:48.909970 kubelet[2550]: I0319 11:28:48.909177 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:28:48.974717 containerd[1468]: time="2025-03-19T11:28:48.974628309Z" level=info msg="StartContainer for \"b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87\" returns successfully" Mar 19 11:28:49.024627 sshd[5131]: Connection closed by 10.0.0.1 port 39742 Mar 19 11:28:49.023761 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Mar 19 11:28:49.027080 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:39742.service: Deactivated successfully. Mar 19 11:28:49.030141 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:28:49.033987 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:28:49.034761 systemd-logind[1451]: Removed session 8. Mar 19 11:28:49.842863 containerd[1468]: time="2025-03-19T11:28:49.842774801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:49.843556 containerd[1468]: time="2025-03-19T11:28:49.843518654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 19 11:28:49.844218 containerd[1468]: time="2025-03-19T11:28:49.844177825Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:49.846996 containerd[1468]: time="2025-03-19T11:28:49.846945791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:49.848012 containerd[1468]: time="2025-03-19T11:28:49.847977329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id 
\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.040619424s" Mar 19 11:28:49.848012 containerd[1468]: time="2025-03-19T11:28:49.848008729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 19 11:28:49.849027 containerd[1468]: time="2025-03-19T11:28:49.848903784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 19 11:28:49.850754 containerd[1468]: time="2025-03-19T11:28:49.850643814Z" level=info msg="CreateContainer within sandbox \"993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 19 11:28:49.866342 containerd[1468]: time="2025-03-19T11:28:49.866296958Z" level=info msg="CreateContainer within sandbox \"993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d1c8b13e6bbfb3ceb907fc400b314c637ce7db4c7bb4796bf706a7a9e9d21d9c\"" Mar 19 11:28:49.866840 containerd[1468]: time="2025-03-19T11:28:49.866818446Z" level=info msg="StartContainer for \"d1c8b13e6bbfb3ceb907fc400b314c637ce7db4c7bb4796bf706a7a9e9d21d9c\"" Mar 19 11:28:49.901543 systemd[1]: Started cri-containerd-d1c8b13e6bbfb3ceb907fc400b314c637ce7db4c7bb4796bf706a7a9e9d21d9c.scope - libcontainer container d1c8b13e6bbfb3ceb907fc400b314c637ce7db4c7bb4796bf706a7a9e9d21d9c. 
Mar 19 11:28:49.933313 kubelet[2550]: I0319 11:28:49.933244 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9cbc458fd-8dzcg" podStartSLOduration=14.952513865 podStartE2EDuration="17.933226486s" podCreationTimestamp="2025-03-19 11:28:32 +0000 UTC" firstStartedPulling="2025-03-19 11:28:45.825548865 +0000 UTC m=+26.286492764" lastFinishedPulling="2025-03-19 11:28:48.806261486 +0000 UTC m=+29.267205385" observedRunningTime="2025-03-19 11:28:49.931935264 +0000 UTC m=+30.392879163" watchObservedRunningTime="2025-03-19 11:28:49.933226486 +0000 UTC m=+30.394170345" Mar 19 11:28:49.938649 containerd[1468]: time="2025-03-19T11:28:49.937402236Z" level=info msg="StartContainer for \"d1c8b13e6bbfb3ceb907fc400b314c637ce7db4c7bb4796bf706a7a9e9d21d9c\" returns successfully" Mar 19 11:28:50.089694 containerd[1468]: time="2025-03-19T11:28:50.089650113Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:50.090066 containerd[1468]: time="2025-03-19T11:28:50.090029839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 19 11:28:50.092390 containerd[1468]: time="2025-03-19T11:28:50.092180274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 243.249129ms" Mar 19 11:28:50.092390 containerd[1468]: time="2025-03-19T11:28:50.092206875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 19 11:28:50.093086 containerd[1468]: 
time="2025-03-19T11:28:50.093009368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 19 11:28:50.094439 containerd[1468]: time="2025-03-19T11:28:50.094411071Z" level=info msg="CreateContainer within sandbox \"5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 19 11:28:50.106599 containerd[1468]: time="2025-03-19T11:28:50.106543628Z" level=info msg="CreateContainer within sandbox \"5243aa6f75e71b46489c76eb16bc26a9f6265570717391080fe57d3052178ea7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80941f9deb93da0f27a262d319310517687a6fffab378d164c3f38bf42bb7bb3\"" Mar 19 11:28:50.107482 containerd[1468]: time="2025-03-19T11:28:50.107309401Z" level=info msg="StartContainer for \"80941f9deb93da0f27a262d319310517687a6fffab378d164c3f38bf42bb7bb3\"" Mar 19 11:28:50.135531 systemd[1]: Started cri-containerd-80941f9deb93da0f27a262d319310517687a6fffab378d164c3f38bf42bb7bb3.scope - libcontainer container 80941f9deb93da0f27a262d319310517687a6fffab378d164c3f38bf42bb7bb3. 
Mar 19 11:28:50.165560 containerd[1468]: time="2025-03-19T11:28:50.165517268Z" level=info msg="StartContainer for \"80941f9deb93da0f27a262d319310517687a6fffab378d164c3f38bf42bb7bb3\" returns successfully" Mar 19 11:28:50.933796 kubelet[2550]: I0319 11:28:50.933374 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:28:50.941143 kubelet[2550]: I0319 11:28:50.941045 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-769667d9d6-grngk" podStartSLOduration=14.829698447 podStartE2EDuration="18.941029851s" podCreationTimestamp="2025-03-19 11:28:32 +0000 UTC" firstStartedPulling="2025-03-19 11:28:45.981551002 +0000 UTC m=+26.442494901" lastFinishedPulling="2025-03-19 11:28:50.092882446 +0000 UTC m=+30.553826305" observedRunningTime="2025-03-19 11:28:50.94097597 +0000 UTC m=+31.401919869" watchObservedRunningTime="2025-03-19 11:28:50.941029851 +0000 UTC m=+31.401973750" Mar 19 11:28:51.223226 containerd[1468]: time="2025-03-19T11:28:51.218839455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:51.223226 containerd[1468]: time="2025-03-19T11:28:51.220222517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 19 11:28:51.223226 containerd[1468]: time="2025-03-19T11:28:51.221109171Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:51.224652 containerd[1468]: time="2025-03-19T11:28:51.224604746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:28:51.225407 
containerd[1468]: time="2025-03-19T11:28:51.225374238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.132158506s" Mar 19 11:28:51.225407 containerd[1468]: time="2025-03-19T11:28:51.225407438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 19 11:28:51.227505 containerd[1468]: time="2025-03-19T11:28:51.227316708Z" level=info msg="CreateContainer within sandbox \"993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 19 11:28:51.240660 containerd[1468]: time="2025-03-19T11:28:51.240613517Z" level=info msg="CreateContainer within sandbox \"993567f3aeab5ccbec3facb9e8df22c62b82197fe4e62dad795580c9b1e6e03d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a7019f1c36834d55c07a6e49b896892fca5134857f3e14f5a3187beb0b878ac5\"" Mar 19 11:28:51.241740 containerd[1468]: time="2025-03-19T11:28:51.241466011Z" level=info msg="StartContainer for \"a7019f1c36834d55c07a6e49b896892fca5134857f3e14f5a3187beb0b878ac5\"" Mar 19 11:28:51.269540 systemd[1]: Started cri-containerd-a7019f1c36834d55c07a6e49b896892fca5134857f3e14f5a3187beb0b878ac5.scope - libcontainer container a7019f1c36834d55c07a6e49b896892fca5134857f3e14f5a3187beb0b878ac5. 
Mar 19 11:28:51.345107 containerd[1468]: time="2025-03-19T11:28:51.345055761Z" level=info msg="StartContainer for \"a7019f1c36834d55c07a6e49b896892fca5134857f3e14f5a3187beb0b878ac5\" returns successfully" Mar 19 11:28:51.698988 kubelet[2550]: I0319 11:28:51.698733 2550 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 19 11:28:51.698988 kubelet[2550]: I0319 11:28:51.698823 2550 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 19 11:28:51.938477 kubelet[2550]: I0319 11:28:51.938447 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:28:51.949787 kubelet[2550]: I0319 11:28:51.949657 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p89jv" podStartSLOduration=14.590118212 podStartE2EDuration="19.949643112s" podCreationTimestamp="2025-03-19 11:28:32 +0000 UTC" firstStartedPulling="2025-03-19 11:28:45.866572429 +0000 UTC m=+26.327516328" lastFinishedPulling="2025-03-19 11:28:51.226097329 +0000 UTC m=+31.687041228" observedRunningTime="2025-03-19 11:28:51.948411693 +0000 UTC m=+32.409355592" watchObservedRunningTime="2025-03-19 11:28:51.949643112 +0000 UTC m=+32.410587011" Mar 19 11:28:54.036151 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:43866.service - OpenSSH per-connection server daemon (10.0.0.1:43866). Mar 19 11:28:54.092200 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 43866 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:28:54.093569 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:28:54.097419 systemd-logind[1451]: New session 9 of user core. Mar 19 11:28:54.111528 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 19 11:28:54.330451 sshd[5316]: Connection closed by 10.0.0.1 port 43866 Mar 19 11:28:54.330310 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Mar 19 11:28:54.333734 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:43866.service: Deactivated successfully. Mar 19 11:28:54.335402 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:28:54.335989 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:28:54.336743 systemd-logind[1451]: Removed session 9. Mar 19 11:28:58.114134 systemd[1]: run-containerd-runc-k8s.io-b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87-runc.WfxZgV.mount: Deactivated successfully. Mar 19 11:28:59.345106 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:43882.service - OpenSSH per-connection server daemon (10.0.0.1:43882). Mar 19 11:28:59.387905 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 43882 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:28:59.388980 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:28:59.393052 systemd-logind[1451]: New session 10 of user core. Mar 19 11:28:59.404507 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:28:59.532787 sshd[5369]: Connection closed by 10.0.0.1 port 43882 Mar 19 11:28:59.533298 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Mar 19 11:28:59.545407 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:43882.service: Deactivated successfully. Mar 19 11:28:59.547994 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:28:59.549670 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:28:59.559681 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:43898.service - OpenSSH per-connection server daemon (10.0.0.1:43898). Mar 19 11:28:59.560965 systemd-logind[1451]: Removed session 10. 
Mar 19 11:28:59.598883 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 43898 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:28:59.599692 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:28:59.603146 systemd-logind[1451]: New session 11 of user core. Mar 19 11:28:59.612516 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 19 11:28:59.822758 sshd[5387]: Connection closed by 10.0.0.1 port 43898 Mar 19 11:28:59.823510 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Mar 19 11:28:59.834454 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:43898.service: Deactivated successfully. Mar 19 11:28:59.836571 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:28:59.837502 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:28:59.847736 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:43900.service - OpenSSH per-connection server daemon (10.0.0.1:43900). Mar 19 11:28:59.848838 systemd-logind[1451]: Removed session 11. Mar 19 11:28:59.889118 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 43900 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:28:59.890398 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:28:59.894083 systemd-logind[1451]: New session 12 of user core. Mar 19 11:28:59.905545 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 19 11:29:00.074168 sshd[5402]: Connection closed by 10.0.0.1 port 43900 Mar 19 11:29:00.075573 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:00.080189 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:43900.service: Deactivated successfully. Mar 19 11:29:00.082263 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:29:00.082885 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. 
Mar 19 11:29:00.083979 systemd-logind[1451]: Removed session 12. Mar 19 11:29:05.093117 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:37270.service - OpenSSH per-connection server daemon (10.0.0.1:37270). Mar 19 11:29:05.141114 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 37270 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:29:05.142389 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:29:05.147458 systemd-logind[1451]: New session 13 of user core. Mar 19 11:29:05.153552 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:29:05.307134 sshd[5418]: Connection closed by 10.0.0.1 port 37270 Mar 19 11:29:05.307579 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:05.317653 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:37270.service: Deactivated successfully. Mar 19 11:29:05.319279 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:29:05.320953 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:29:05.324908 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:37276.service - OpenSSH per-connection server daemon (10.0.0.1:37276). Mar 19 11:29:05.326333 systemd-logind[1451]: Removed session 13. Mar 19 11:29:05.369581 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 37276 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:29:05.371430 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:29:05.375461 systemd-logind[1451]: New session 14 of user core. Mar 19 11:29:05.385512 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 19 11:29:05.591575 sshd[5434]: Connection closed by 10.0.0.1 port 37276 Mar 19 11:29:05.591927 sshd-session[5431]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:05.605976 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:37276.service: Deactivated successfully. Mar 19 11:29:05.607664 systemd[1]: session-14.scope: Deactivated successfully. Mar 19 11:29:05.608405 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Mar 19 11:29:05.622622 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:37286.service - OpenSSH per-connection server daemon (10.0.0.1:37286). Mar 19 11:29:05.623623 systemd-logind[1451]: Removed session 14. Mar 19 11:29:05.669086 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 37286 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:29:05.670422 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:29:05.674432 systemd-logind[1451]: New session 15 of user core. Mar 19 11:29:05.684511 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 19 11:29:07.119045 sshd[5447]: Connection closed by 10.0.0.1 port 37286 Mar 19 11:29:07.119540 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:07.138678 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:37292.service - OpenSSH per-connection server daemon (10.0.0.1:37292). Mar 19 11:29:07.139981 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:37286.service: Deactivated successfully. Mar 19 11:29:07.142310 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:29:07.142756 systemd[1]: session-15.scope: Consumed 493ms CPU time, 67.5M memory peak. Mar 19 11:29:07.146164 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:29:07.149213 systemd-logind[1451]: Removed session 15. 
Mar 19 11:29:07.191173 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 37292 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:29:07.192468 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:29:07.196538 systemd-logind[1451]: New session 16 of user core. Mar 19 11:29:07.214499 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 19 11:29:07.540136 sshd[5479]: Connection closed by 10.0.0.1 port 37292 Mar 19 11:29:07.541139 sshd-session[5474]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:07.552665 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:37292.service: Deactivated successfully. Mar 19 11:29:07.554223 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:29:07.555497 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:29:07.563618 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294). Mar 19 11:29:07.564645 systemd-logind[1451]: Removed session 16. Mar 19 11:29:07.603934 sshd[5489]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:29:07.605418 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:29:07.609411 systemd-logind[1451]: New session 17 of user core. Mar 19 11:29:07.619496 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 19 11:29:07.760393 sshd[5492]: Connection closed by 10.0.0.1 port 37294 Mar 19 11:29:07.760908 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:07.765008 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Mar 19 11:29:07.765268 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:37294.service: Deactivated successfully. Mar 19 11:29:07.767092 systemd[1]: session-17.scope: Deactivated successfully. 
Mar 19 11:29:07.768431 systemd-logind[1451]: Removed session 17. Mar 19 11:29:09.076456 kubelet[2550]: I0319 11:29:09.076405 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:29:09.094818 systemd[1]: run-containerd-runc-k8s.io-b3e1e07a56bd269aec83e94228a224543240e52b204c89976de50f744e1a8e87-runc.LcTM9b.mount: Deactivated successfully. Mar 19 11:29:09.412664 kubelet[2550]: I0319 11:29:09.412549 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:29:12.772773 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:33446.service - OpenSSH per-connection server daemon (10.0.0.1:33446). Mar 19 11:29:12.819075 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 33446 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:29:12.820318 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:29:12.824681 systemd-logind[1451]: New session 18 of user core. Mar 19 11:29:12.834525 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 19 11:29:12.988144 sshd[5551]: Connection closed by 10.0.0.1 port 33446 Mar 19 11:29:12.988513 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Mar 19 11:29:12.992948 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:33446.service: Deactivated successfully. Mar 19 11:29:12.995900 systemd[1]: session-18.scope: Deactivated successfully. Mar 19 11:29:12.996542 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Mar 19 11:29:12.997344 systemd-logind[1451]: Removed session 18. Mar 19 11:29:18.000986 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). 
Mar 19 11:29:18.064859 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:29:18.065468 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:29:18.069828 systemd-logind[1451]: New session 19 of user core.
Mar 19 11:29:18.076583 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 11:29:18.210144 sshd[5593]: Connection closed by 10.0.0.1 port 33454
Mar 19 11:29:18.210506 sshd-session[5591]: pam_unix(sshd:session): session closed for user core
Mar 19 11:29:18.214659 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:33454.service: Deactivated successfully.
Mar 19 11:29:18.216446 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 11:29:18.218731 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Mar 19 11:29:18.219894 systemd-logind[1451]: Removed session 19.
Mar 19 11:29:19.602824 containerd[1468]: time="2025-03-19T11:29:19.602786159Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\""
Mar 19 11:29:19.603204 containerd[1468]: time="2025-03-19T11:29:19.602895920Z" level=info msg="TearDown network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" successfully"
Mar 19 11:29:19.603204 containerd[1468]: time="2025-03-19T11:29:19.602906600Z" level=info msg="StopPodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" returns successfully"
Mar 19 11:29:19.603709 containerd[1468]: time="2025-03-19T11:29:19.603676847Z" level=info msg="RemovePodSandbox for \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\""
Mar 19 11:29:19.609771 containerd[1468]: time="2025-03-19T11:29:19.609718101Z" level=info msg="Forcibly stopping sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\""
Mar 19 11:29:19.621350 containerd[1468]: time="2025-03-19T11:29:19.621302164Z" level=info msg="TearDown network for sandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" successfully"
Mar 19 11:29:19.634349 containerd[1468]: time="2025-03-19T11:29:19.634286840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.634478 containerd[1468]: time="2025-03-19T11:29:19.634420521Z" level=info msg="RemovePodSandbox \"fb7ab620c53126154183875a3d589cc55b06aa6acafc9d91fb71e1b0f24ba8dd\" returns successfully"
Mar 19 11:29:19.635091 containerd[1468]: time="2025-03-19T11:29:19.635004966Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\""
Mar 19 11:29:19.635186 containerd[1468]: time="2025-03-19T11:29:19.635170607Z" level=info msg="TearDown network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" successfully"
Mar 19 11:29:19.635214 containerd[1468]: time="2025-03-19T11:29:19.635184848Z" level=info msg="StopPodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" returns successfully"
Mar 19 11:29:19.635488 containerd[1468]: time="2025-03-19T11:29:19.635464970Z" level=info msg="RemovePodSandbox for \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\""
Mar 19 11:29:19.635488 containerd[1468]: time="2025-03-19T11:29:19.635489290Z" level=info msg="Forcibly stopping sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\""
Mar 19 11:29:19.635590 containerd[1468]: time="2025-03-19T11:29:19.635575691Z" level=info msg="TearDown network for sandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" successfully"
Mar 19 11:29:19.652945 containerd[1468]: time="2025-03-19T11:29:19.652892885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.653070 containerd[1468]: time="2025-03-19T11:29:19.652966566Z" level=info msg="RemovePodSandbox \"3a1a2148624e766793cc0e91adbd5570691fd4e55e76329cd48dd565f7b20f15\" returns successfully"
Mar 19 11:29:19.653604 containerd[1468]: time="2025-03-19T11:29:19.653568171Z" level=info msg="StopPodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\""
Mar 19 11:29:19.653693 containerd[1468]: time="2025-03-19T11:29:19.653675452Z" level=info msg="TearDown network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" successfully"
Mar 19 11:29:19.653693 containerd[1468]: time="2025-03-19T11:29:19.653691252Z" level=info msg="StopPodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" returns successfully"
Mar 19 11:29:19.654208 containerd[1468]: time="2025-03-19T11:29:19.654186737Z" level=info msg="RemovePodSandbox for \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\""
Mar 19 11:29:19.654258 containerd[1468]: time="2025-03-19T11:29:19.654212257Z" level=info msg="Forcibly stopping sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\""
Mar 19 11:29:19.654298 containerd[1468]: time="2025-03-19T11:29:19.654283018Z" level=info msg="TearDown network for sandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" successfully"
Mar 19 11:29:19.657645 containerd[1468]: time="2025-03-19T11:29:19.657604447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.657714 containerd[1468]: time="2025-03-19T11:29:19.657670528Z" level=info msg="RemovePodSandbox \"6e5171383605407747e172140d309b1ad95cefcf49cb2a515c23847719c1d0ee\" returns successfully"
Mar 19 11:29:19.658038 containerd[1468]: time="2025-03-19T11:29:19.658010771Z" level=info msg="StopPodSandbox for \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\""
Mar 19 11:29:19.658122 containerd[1468]: time="2025-03-19T11:29:19.658105972Z" level=info msg="TearDown network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\" successfully"
Mar 19 11:29:19.658153 containerd[1468]: time="2025-03-19T11:29:19.658120892Z" level=info msg="StopPodSandbox for \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\" returns successfully"
Mar 19 11:29:19.658551 containerd[1468]: time="2025-03-19T11:29:19.658523215Z" level=info msg="RemovePodSandbox for \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\""
Mar 19 11:29:19.658622 containerd[1468]: time="2025-03-19T11:29:19.658555776Z" level=info msg="Forcibly stopping sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\""
Mar 19 11:29:19.658648 containerd[1468]: time="2025-03-19T11:29:19.658616816Z" level=info msg="TearDown network for sandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\" successfully"
Mar 19 11:29:19.661464 containerd[1468]: time="2025-03-19T11:29:19.661428081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.661547 containerd[1468]: time="2025-03-19T11:29:19.661488362Z" level=info msg="RemovePodSandbox \"172655014c146b1c4a822dca44a4c4668ab48ea8ebe406742055ee5a6ad3523e\" returns successfully"
Mar 19 11:29:19.662087 containerd[1468]: time="2025-03-19T11:29:19.662060487Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\""
Mar 19 11:29:19.662165 containerd[1468]: time="2025-03-19T11:29:19.662148088Z" level=info msg="TearDown network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" successfully"
Mar 19 11:29:19.662200 containerd[1468]: time="2025-03-19T11:29:19.662163928Z" level=info msg="StopPodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" returns successfully"
Mar 19 11:29:19.662767 containerd[1468]: time="2025-03-19T11:29:19.662744533Z" level=info msg="RemovePodSandbox for \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\""
Mar 19 11:29:19.662767 containerd[1468]: time="2025-03-19T11:29:19.662767773Z" level=info msg="Forcibly stopping sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\""
Mar 19 11:29:19.662833 containerd[1468]: time="2025-03-19T11:29:19.662824414Z" level=info msg="TearDown network for sandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" successfully"
Mar 19 11:29:19.666180 containerd[1468]: time="2025-03-19T11:29:19.665965241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.666180 containerd[1468]: time="2025-03-19T11:29:19.666017002Z" level=info msg="RemovePodSandbox \"b8a83f6af16990278913123c0bef9185506dbc2c752d2e57d929f815d38893b5\" returns successfully"
Mar 19 11:29:19.673872 containerd[1468]: time="2025-03-19T11:29:19.673826391Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\""
Mar 19 11:29:19.673984 containerd[1468]: time="2025-03-19T11:29:19.673936832Z" level=info msg="TearDown network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" successfully"
Mar 19 11:29:19.673984 containerd[1468]: time="2025-03-19T11:29:19.673947113Z" level=info msg="StopPodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" returns successfully"
Mar 19 11:29:19.675087 containerd[1468]: time="2025-03-19T11:29:19.674965922Z" level=info msg="RemovePodSandbox for \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\""
Mar 19 11:29:19.675087 containerd[1468]: time="2025-03-19T11:29:19.674992482Z" level=info msg="Forcibly stopping sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\""
Mar 19 11:29:19.675087 containerd[1468]: time="2025-03-19T11:29:19.675063202Z" level=info msg="TearDown network for sandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" successfully"
Mar 19 11:29:19.677884 containerd[1468]: time="2025-03-19T11:29:19.677833627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.677953 containerd[1468]: time="2025-03-19T11:29:19.677893828Z" level=info msg="RemovePodSandbox \"388b196cb9275cf602d70d7ea70921ddbfed084d77116d52ca7db2ca43d002ff\" returns successfully"
Mar 19 11:29:19.678560 containerd[1468]: time="2025-03-19T11:29:19.678254591Z" level=info msg="StopPodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\""
Mar 19 11:29:19.678560 containerd[1468]: time="2025-03-19T11:29:19.678354832Z" level=info msg="TearDown network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" successfully"
Mar 19 11:29:19.678560 containerd[1468]: time="2025-03-19T11:29:19.678390592Z" level=info msg="StopPodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" returns successfully"
Mar 19 11:29:19.678839 containerd[1468]: time="2025-03-19T11:29:19.678794396Z" level=info msg="RemovePodSandbox for \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\""
Mar 19 11:29:19.678839 containerd[1468]: time="2025-03-19T11:29:19.678826556Z" level=info msg="Forcibly stopping sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\""
Mar 19 11:29:19.678907 containerd[1468]: time="2025-03-19T11:29:19.678895197Z" level=info msg="TearDown network for sandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" successfully"
Mar 19 11:29:19.681412 containerd[1468]: time="2025-03-19T11:29:19.681342138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.681512 containerd[1468]: time="2025-03-19T11:29:19.681423939Z" level=info msg="RemovePodSandbox \"5099d1b5ae41356b10146ca2216e032ef368c589e200f76e1df6defc2c2ef921\" returns successfully"
Mar 19 11:29:19.682095 containerd[1468]: time="2025-03-19T11:29:19.681813943Z" level=info msg="StopPodSandbox for \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\""
Mar 19 11:29:19.682095 containerd[1468]: time="2025-03-19T11:29:19.681900863Z" level=info msg="TearDown network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\" successfully"
Mar 19 11:29:19.682095 containerd[1468]: time="2025-03-19T11:29:19.681910983Z" level=info msg="StopPodSandbox for \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\" returns successfully"
Mar 19 11:29:19.682381 containerd[1468]: time="2025-03-19T11:29:19.682331787Z" level=info msg="RemovePodSandbox for \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\""
Mar 19 11:29:19.682381 containerd[1468]: time="2025-03-19T11:29:19.682370667Z" level=info msg="Forcibly stopping sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\""
Mar 19 11:29:19.682450 containerd[1468]: time="2025-03-19T11:29:19.682439468Z" level=info msg="TearDown network for sandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\" successfully"
Mar 19 11:29:19.685043 containerd[1468]: time="2025-03-19T11:29:19.684994691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.685089 containerd[1468]: time="2025-03-19T11:29:19.685066171Z" level=info msg="RemovePodSandbox \"1647ae3a5e27097826278991df231464698303f33b56a6db3be92caa4e07a5ad\" returns successfully"
Mar 19 11:29:19.685448 containerd[1468]: time="2025-03-19T11:29:19.685411535Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\""
Mar 19 11:29:19.685529 containerd[1468]: time="2025-03-19T11:29:19.685513175Z" level=info msg="TearDown network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" successfully"
Mar 19 11:29:19.685529 containerd[1468]: time="2025-03-19T11:29:19.685525736Z" level=info msg="StopPodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" returns successfully"
Mar 19 11:29:19.685906 containerd[1468]: time="2025-03-19T11:29:19.685858579Z" level=info msg="RemovePodSandbox for \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\""
Mar 19 11:29:19.685951 containerd[1468]: time="2025-03-19T11:29:19.685908899Z" level=info msg="Forcibly stopping sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\""
Mar 19 11:29:19.685991 containerd[1468]: time="2025-03-19T11:29:19.685973660Z" level=info msg="TearDown network for sandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" successfully"
Mar 19 11:29:19.688727 containerd[1468]: time="2025-03-19T11:29:19.688653883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.688804 containerd[1468]: time="2025-03-19T11:29:19.688786165Z" level=info msg="RemovePodSandbox \"97762173def12074aaa2316f102efa9d32023260711ace31796875e11a8b6a17\" returns successfully"
Mar 19 11:29:19.689504 containerd[1468]: time="2025-03-19T11:29:19.689328289Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\""
Mar 19 11:29:19.689504 containerd[1468]: time="2025-03-19T11:29:19.689435450Z" level=info msg="TearDown network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" successfully"
Mar 19 11:29:19.689504 containerd[1468]: time="2025-03-19T11:29:19.689448210Z" level=info msg="StopPodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" returns successfully"
Mar 19 11:29:19.691417 containerd[1468]: time="2025-03-19T11:29:19.690091376Z" level=info msg="RemovePodSandbox for \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\""
Mar 19 11:29:19.691417 containerd[1468]: time="2025-03-19T11:29:19.690127976Z" level=info msg="Forcibly stopping sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\""
Mar 19 11:29:19.691417 containerd[1468]: time="2025-03-19T11:29:19.690195097Z" level=info msg="TearDown network for sandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" successfully"
Mar 19 11:29:19.693077 containerd[1468]: time="2025-03-19T11:29:19.693004682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.693151 containerd[1468]: time="2025-03-19T11:29:19.693130203Z" level=info msg="RemovePodSandbox \"37731a9152e224fb817eadf6be0eeed0bc259a71425ebac2c408e65ad4559c88\" returns successfully"
Mar 19 11:29:19.693474 containerd[1468]: time="2025-03-19T11:29:19.693445966Z" level=info msg="StopPodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\""
Mar 19 11:29:19.694407 containerd[1468]: time="2025-03-19T11:29:19.693534047Z" level=info msg="TearDown network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" successfully"
Mar 19 11:29:19.694407 containerd[1468]: time="2025-03-19T11:29:19.693547727Z" level=info msg="StopPodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" returns successfully"
Mar 19 11:29:19.694921 containerd[1468]: time="2025-03-19T11:29:19.694887179Z" level=info msg="RemovePodSandbox for \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\""
Mar 19 11:29:19.694921 containerd[1468]: time="2025-03-19T11:29:19.694916979Z" level=info msg="Forcibly stopping sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\""
Mar 19 11:29:19.694993 containerd[1468]: time="2025-03-19T11:29:19.694983700Z" level=info msg="TearDown network for sandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" successfully"
Mar 19 11:29:19.698803 containerd[1468]: time="2025-03-19T11:29:19.698763853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.698897 containerd[1468]: time="2025-03-19T11:29:19.698879134Z" level=info msg="RemovePodSandbox \"0dc01f58f7f78791628168990a58130640b0b4af25d31e5affb07715bc57bd44\" returns successfully"
Mar 19 11:29:19.699328 containerd[1468]: time="2025-03-19T11:29:19.699306298Z" level=info msg="StopPodSandbox for \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\""
Mar 19 11:29:19.699473 containerd[1468]: time="2025-03-19T11:29:19.699453539Z" level=info msg="TearDown network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\" successfully"
Mar 19 11:29:19.699501 containerd[1468]: time="2025-03-19T11:29:19.699472940Z" level=info msg="StopPodSandbox for \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\" returns successfully"
Mar 19 11:29:19.699825 containerd[1468]: time="2025-03-19T11:29:19.699791582Z" level=info msg="RemovePodSandbox for \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\""
Mar 19 11:29:19.699825 containerd[1468]: time="2025-03-19T11:29:19.699821663Z" level=info msg="Forcibly stopping sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\""
Mar 19 11:29:19.699895 containerd[1468]: time="2025-03-19T11:29:19.699880143Z" level=info msg="TearDown network for sandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\" successfully"
Mar 19 11:29:19.702491 containerd[1468]: time="2025-03-19T11:29:19.702442766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.702555 containerd[1468]: time="2025-03-19T11:29:19.702504047Z" level=info msg="RemovePodSandbox \"80efed6ea04fa7152d417b3aa9248327f4b54bceab80cc9284e8201c25a71f69\" returns successfully"
Mar 19 11:29:19.702850 containerd[1468]: time="2025-03-19T11:29:19.702819729Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\""
Mar 19 11:29:19.702928 containerd[1468]: time="2025-03-19T11:29:19.702910170Z" level=info msg="TearDown network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" successfully"
Mar 19 11:29:19.702958 containerd[1468]: time="2025-03-19T11:29:19.702926730Z" level=info msg="StopPodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" returns successfully"
Mar 19 11:29:19.703190 containerd[1468]: time="2025-03-19T11:29:19.703170933Z" level=info msg="RemovePodSandbox for \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\""
Mar 19 11:29:19.703234 containerd[1468]: time="2025-03-19T11:29:19.703195573Z" level=info msg="Forcibly stopping sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\""
Mar 19 11:29:19.703261 containerd[1468]: time="2025-03-19T11:29:19.703251693Z" level=info msg="TearDown network for sandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" successfully"
Mar 19 11:29:19.705971 containerd[1468]: time="2025-03-19T11:29:19.705876037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.706088 containerd[1468]: time="2025-03-19T11:29:19.706052438Z" level=info msg="RemovePodSandbox \"5f39354bd67cb8308750a42c42c9a6b0ae0938814d734f7dd79fc39fb1b52c96\" returns successfully"
Mar 19 11:29:19.706642 containerd[1468]: time="2025-03-19T11:29:19.706619723Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\""
Mar 19 11:29:19.706720 containerd[1468]: time="2025-03-19T11:29:19.706703684Z" level=info msg="TearDown network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" successfully"
Mar 19 11:29:19.706745 containerd[1468]: time="2025-03-19T11:29:19.706718404Z" level=info msg="StopPodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" returns successfully"
Mar 19 11:29:19.706996 containerd[1468]: time="2025-03-19T11:29:19.706971566Z" level=info msg="RemovePodSandbox for \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\""
Mar 19 11:29:19.707055 containerd[1468]: time="2025-03-19T11:29:19.707042367Z" level=info msg="Forcibly stopping sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\""
Mar 19 11:29:19.707119 containerd[1468]: time="2025-03-19T11:29:19.707102128Z" level=info msg="TearDown network for sandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" successfully"
Mar 19 11:29:19.709973 containerd[1468]: time="2025-03-19T11:29:19.709935833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.710042 containerd[1468]: time="2025-03-19T11:29:19.709992833Z" level=info msg="RemovePodSandbox \"1501881875b20205e5bd1bc21985c42c179bda14f6c0bcdb3566a343a723e3aa\" returns successfully"
Mar 19 11:29:19.710345 containerd[1468]: time="2025-03-19T11:29:19.710315836Z" level=info msg="StopPodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\""
Mar 19 11:29:19.710435 containerd[1468]: time="2025-03-19T11:29:19.710418237Z" level=info msg="TearDown network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" successfully"
Mar 19 11:29:19.710484 containerd[1468]: time="2025-03-19T11:29:19.710434397Z" level=info msg="StopPodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" returns successfully"
Mar 19 11:29:19.712098 containerd[1468]: time="2025-03-19T11:29:19.710780960Z" level=info msg="RemovePodSandbox for \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\""
Mar 19 11:29:19.712098 containerd[1468]: time="2025-03-19T11:29:19.710811921Z" level=info msg="Forcibly stopping sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\""
Mar 19 11:29:19.712098 containerd[1468]: time="2025-03-19T11:29:19.710872241Z" level=info msg="TearDown network for sandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" successfully"
Mar 19 11:29:19.713341 containerd[1468]: time="2025-03-19T11:29:19.713310423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.713512 containerd[1468]: time="2025-03-19T11:29:19.713492624Z" level=info msg="RemovePodSandbox \"adf061dddc4880543f349995c33c68b79507cb557669952913325f5e861609b5\" returns successfully"
Mar 19 11:29:19.713989 containerd[1468]: time="2025-03-19T11:29:19.713962229Z" level=info msg="StopPodSandbox for \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\""
Mar 19 11:29:19.714077 containerd[1468]: time="2025-03-19T11:29:19.714057549Z" level=info msg="TearDown network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\" successfully"
Mar 19 11:29:19.714077 containerd[1468]: time="2025-03-19T11:29:19.714073790Z" level=info msg="StopPodSandbox for \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\" returns successfully"
Mar 19 11:29:19.724248 containerd[1468]: time="2025-03-19T11:29:19.724191920Z" level=info msg="RemovePodSandbox for \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\""
Mar 19 11:29:19.724248 containerd[1468]: time="2025-03-19T11:29:19.724246560Z" level=info msg="Forcibly stopping sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\""
Mar 19 11:29:19.724379 containerd[1468]: time="2025-03-19T11:29:19.724335561Z" level=info msg="TearDown network for sandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\" successfully"
Mar 19 11:29:19.727018 containerd[1468]: time="2025-03-19T11:29:19.726975544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.727080 containerd[1468]: time="2025-03-19T11:29:19.727048105Z" level=info msg="RemovePodSandbox \"4546ba170875087003c97d2885541fa2c80ee25774b4ef314dcbc4b5f1377367\" returns successfully"
Mar 19 11:29:19.727532 containerd[1468]: time="2025-03-19T11:29:19.727504709Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\""
Mar 19 11:29:19.727807 containerd[1468]: time="2025-03-19T11:29:19.727699951Z" level=info msg="TearDown network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" successfully"
Mar 19 11:29:19.727807 containerd[1468]: time="2025-03-19T11:29:19.727718071Z" level=info msg="StopPodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" returns successfully"
Mar 19 11:29:19.728092 containerd[1468]: time="2025-03-19T11:29:19.728067634Z" level=info msg="RemovePodSandbox for \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\""
Mar 19 11:29:19.728131 containerd[1468]: time="2025-03-19T11:29:19.728097874Z" level=info msg="Forcibly stopping sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\""
Mar 19 11:29:19.728173 containerd[1468]: time="2025-03-19T11:29:19.728161675Z" level=info msg="TearDown network for sandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" successfully"
Mar 19 11:29:19.730830 containerd[1468]: time="2025-03-19T11:29:19.730795818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.730830 containerd[1468]: time="2025-03-19T11:29:19.730857139Z" level=info msg="RemovePodSandbox \"840c6cae7a8aa2bd07956b7dc20c5f0f8e46a172ffc859d0b71d113987506a43\" returns successfully"
Mar 19 11:29:19.731564 containerd[1468]: time="2025-03-19T11:29:19.731406424Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\""
Mar 19 11:29:19.731564 containerd[1468]: time="2025-03-19T11:29:19.731502505Z" level=info msg="TearDown network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" successfully"
Mar 19 11:29:19.731564 containerd[1468]: time="2025-03-19T11:29:19.731512985Z" level=info msg="StopPodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" returns successfully"
Mar 19 11:29:19.732067 containerd[1468]: time="2025-03-19T11:29:19.731929428Z" level=info msg="RemovePodSandbox for \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\""
Mar 19 11:29:19.732067 containerd[1468]: time="2025-03-19T11:29:19.731957349Z" level=info msg="Forcibly stopping sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\""
Mar 19 11:29:19.732067 containerd[1468]: time="2025-03-19T11:29:19.732025029Z" level=info msg="TearDown network for sandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" successfully"
Mar 19 11:29:19.734526 containerd[1468]: time="2025-03-19T11:29:19.734455171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.734526 containerd[1468]: time="2025-03-19T11:29:19.734519092Z" level=info msg="RemovePodSandbox \"5baabcd003af295ac4f79d622247fa9ca5dedbd891f249fb298309e14e033911\" returns successfully"
Mar 19 11:29:19.735241 containerd[1468]: time="2025-03-19T11:29:19.735046536Z" level=info msg="StopPodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\""
Mar 19 11:29:19.735241 containerd[1468]: time="2025-03-19T11:29:19.735147777Z" level=info msg="TearDown network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" successfully"
Mar 19 11:29:19.735241 containerd[1468]: time="2025-03-19T11:29:19.735158497Z" level=info msg="StopPodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" returns successfully"
Mar 19 11:29:19.735726 containerd[1468]: time="2025-03-19T11:29:19.735700902Z" level=info msg="RemovePodSandbox for \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\""
Mar 19 11:29:19.735813 containerd[1468]: time="2025-03-19T11:29:19.735751302Z" level=info msg="Forcibly stopping sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\""
Mar 19 11:29:19.735854 containerd[1468]: time="2025-03-19T11:29:19.735822063Z" level=info msg="TearDown network for sandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" successfully"
Mar 19 11:29:19.742941 containerd[1468]: time="2025-03-19T11:29:19.742897126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.743051 containerd[1468]: time="2025-03-19T11:29:19.742970247Z" level=info msg="RemovePodSandbox \"00d5a48bf032b9df93ad67ca77c99a36cd9432b27cd88880a243a23404b048ca\" returns successfully"
Mar 19 11:29:19.743658 containerd[1468]: time="2025-03-19T11:29:19.743549412Z" level=info msg="StopPodSandbox for \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\""
Mar 19 11:29:19.743746 containerd[1468]: time="2025-03-19T11:29:19.743660053Z" level=info msg="TearDown network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\" successfully"
Mar 19 11:29:19.743746 containerd[1468]: time="2025-03-19T11:29:19.743672573Z" level=info msg="StopPodSandbox for \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\" returns successfully"
Mar 19 11:29:19.745383 containerd[1468]: time="2025-03-19T11:29:19.744146297Z" level=info msg="RemovePodSandbox for \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\""
Mar 19 11:29:19.745383 containerd[1468]: time="2025-03-19T11:29:19.744175937Z" level=info msg="Forcibly stopping sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\""
Mar 19 11:29:19.745383 containerd[1468]: time="2025-03-19T11:29:19.744300859Z" level=info msg="TearDown network for sandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\" successfully"
Mar 19 11:29:19.755431 containerd[1468]: time="2025-03-19T11:29:19.755378837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.755567 containerd[1468]: time="2025-03-19T11:29:19.755445038Z" level=info msg="RemovePodSandbox \"142ac999424a3bda935ff680065e5cf80b76f24d0d72466993753ae213f26e27\" returns successfully"
Mar 19 11:29:19.756210 containerd[1468]: time="2025-03-19T11:29:19.756061363Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\""
Mar 19 11:29:19.756210 containerd[1468]: time="2025-03-19T11:29:19.756161084Z" level=info msg="TearDown network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" successfully"
Mar 19 11:29:19.756210 containerd[1468]: time="2025-03-19T11:29:19.756171844Z" level=info msg="StopPodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" returns successfully"
Mar 19 11:29:19.756755 containerd[1468]: time="2025-03-19T11:29:19.756715649Z" level=info msg="RemovePodSandbox for \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\""
Mar 19 11:29:19.756755 containerd[1468]: time="2025-03-19T11:29:19.756744049Z" level=info msg="Forcibly stopping sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\""
Mar 19 11:29:19.756883 containerd[1468]: time="2025-03-19T11:29:19.756810930Z" level=info msg="TearDown network for sandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" successfully"
Mar 19 11:29:19.759284 containerd[1468]: time="2025-03-19T11:29:19.759242352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.759345 containerd[1468]: time="2025-03-19T11:29:19.759314512Z" level=info msg="RemovePodSandbox \"eb978401b215d3807382b20ab3beecaaa073f8b5598d355bb0c67c4335cb2fa3\" returns successfully"
Mar 19 11:29:19.759894 containerd[1468]: time="2025-03-19T11:29:19.759766396Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\""
Mar 19 11:29:19.760142 containerd[1468]: time="2025-03-19T11:29:19.760119559Z" level=info msg="TearDown network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" successfully"
Mar 19 11:29:19.760227 containerd[1468]: time="2025-03-19T11:29:19.760212120Z" level=info msg="StopPodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" returns successfully"
Mar 19 11:29:19.760856 containerd[1468]: time="2025-03-19T11:29:19.760703245Z" level=info msg="RemovePodSandbox for \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\""
Mar 19 11:29:19.760856 containerd[1468]: time="2025-03-19T11:29:19.760734685Z" level=info msg="Forcibly stopping sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\""
Mar 19 11:29:19.760856 containerd[1468]: time="2025-03-19T11:29:19.760805645Z" level=info msg="TearDown network for sandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" successfully"
Mar 19 11:29:19.763613 containerd[1468]: time="2025-03-19T11:29:19.763556230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.763685 containerd[1468]: time="2025-03-19T11:29:19.763621630Z" level=info msg="RemovePodSandbox \"eaa27dd7f3e57f333b711dd0a61ae031693a94592f082b7d9ce4eb9e0f06b434\" returns successfully"
Mar 19 11:29:19.764210 containerd[1468]: time="2025-03-19T11:29:19.764028994Z" level=info msg="StopPodSandbox for \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\""
Mar 19 11:29:19.764210 containerd[1468]: time="2025-03-19T11:29:19.764139235Z" level=info msg="TearDown network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" successfully"
Mar 19 11:29:19.764210 containerd[1468]: time="2025-03-19T11:29:19.764150435Z" level=info msg="StopPodSandbox for \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" returns successfully"
Mar 19 11:29:19.764505 containerd[1468]: time="2025-03-19T11:29:19.764478638Z" level=info msg="RemovePodSandbox for \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\""
Mar 19 11:29:19.764552 containerd[1468]: time="2025-03-19T11:29:19.764511118Z" level=info msg="Forcibly stopping sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\""
Mar 19 11:29:19.764593 containerd[1468]: time="2025-03-19T11:29:19.764578679Z" level=info msg="TearDown network for sandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" successfully"
Mar 19 11:29:19.767502 containerd[1468]: time="2025-03-19T11:29:19.767454465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.767603 containerd[1468]: time="2025-03-19T11:29:19.767525905Z" level=info msg="RemovePodSandbox \"4670cd55068257d9908c0a3770944f77dcc0d53a8dc42c82403390610302f6cc\" returns successfully"
Mar 19 11:29:19.767891 containerd[1468]: time="2025-03-19T11:29:19.767865748Z" level=info msg="StopPodSandbox for \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\""
Mar 19 11:29:19.767976 containerd[1468]: time="2025-03-19T11:29:19.767960669Z" level=info msg="TearDown network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\" successfully"
Mar 19 11:29:19.768009 containerd[1468]: time="2025-03-19T11:29:19.767974309Z" level=info msg="StopPodSandbox for \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\" returns successfully"
Mar 19 11:29:19.768315 containerd[1468]: time="2025-03-19T11:29:19.768288152Z" level=info msg="RemovePodSandbox for \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\""
Mar 19 11:29:19.768382 containerd[1468]: time="2025-03-19T11:29:19.768322992Z" level=info msg="Forcibly stopping sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\""
Mar 19 11:29:19.768572 containerd[1468]: time="2025-03-19T11:29:19.768539074Z" level=info msg="TearDown network for sandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\" successfully"
Mar 19 11:29:19.771446 containerd[1468]: time="2025-03-19T11:29:19.771402340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:29:19.771533 containerd[1468]: time="2025-03-19T11:29:19.771473460Z" level=info msg="RemovePodSandbox \"68c67c4db20d839e221731e53d70fe226f28871ccf7a690b34c4f68b38dd3dc3\" returns successfully"
Mar 19 11:29:23.232796 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608).
Mar 19 11:29:23.279741 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:29:23.281280 sshd-session[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:29:23.285925 systemd-logind[1451]: New session 20 of user core.
Mar 19 11:29:23.297559 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 11:29:23.471523 sshd[5610]: Connection closed by 10.0.0.1 port 35608
Mar 19 11:29:23.471917 sshd-session[5608]: pam_unix(sshd:session): session closed for user core
Mar 19 11:29:23.475310 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:35608.service: Deactivated successfully.
Mar 19 11:29:23.477211 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 11:29:23.478072 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Mar 19 11:29:23.479190 systemd-logind[1451]: Removed session 20.