Feb 13 15:10:08.893969 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:10:08.893991 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025 Feb 13 15:10:08.894001 kernel: KASLR enabled Feb 13 15:10:08.894006 kernel: efi: EFI v2.7 by EDK II Feb 13 15:10:08.894012 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 Feb 13 15:10:08.894017 kernel: random: crng init done Feb 13 15:10:08.894024 kernel: secureboot: Secure boot disabled Feb 13 15:10:08.894030 kernel: ACPI: Early table checksum verification disabled Feb 13 15:10:08.894036 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 15:10:08.894043 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:10:08.894050 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894055 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894061 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894067 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894075 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894082 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894088 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894095 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894101 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:08.894107 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 15:10:08.894113 kernel: NUMA: Failed to initialise from firmware Feb 13 15:10:08.894120 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:10:08.894126 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Feb 13 15:10:08.894132 kernel: Zone ranges: Feb 13 15:10:08.894138 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:10:08.894145 kernel: DMA32 empty Feb 13 15:10:08.894151 kernel: Normal empty Feb 13 15:10:08.894157 kernel: Movable zone start for each node Feb 13 15:10:08.894163 kernel: Early memory node ranges Feb 13 15:10:08.894169 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 15:10:08.894176 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 15:10:08.894182 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 15:10:08.894188 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 15:10:08.894194 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 15:10:08.894200 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 15:10:08.894206 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 15:10:08.894212 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:10:08.894220 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 15:10:08.894226 kernel: psci: probing for conduit method from ACPI. Feb 13 15:10:08.894232 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 15:10:08.894241 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:10:08.894247 kernel: psci: Trusted OS migration not required Feb 13 15:10:08.894254 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:10:08.894262 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:10:08.894269 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:10:08.894276 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:10:08.894283 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 15:10:08.894289 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:10:08.894310 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:10:08.894317 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:10:08.894324 kernel: CPU features: detected: Spectre-v4 Feb 13 15:10:08.894331 kernel: CPU features: detected: Spectre-BHB Feb 13 15:10:08.894338 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:10:08.894346 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:10:08.894353 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:10:08.894360 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:10:08.894366 kernel: alternatives: applying boot alternatives Feb 13 15:10:08.894374 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:10:08.894381 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:10:08.894388 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:10:08.894395 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:10:08.894401 kernel: Fallback order for Node 0: 0 Feb 13 15:10:08.894408 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 15:10:08.894415 kernel: Policy zone: DMA Feb 13 15:10:08.894423 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:10:08.894430 kernel: software IO TLB: area num 4. Feb 13 15:10:08.894436 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 15:10:08.894443 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved) Feb 13 15:10:08.894450 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:10:08.894457 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:10:08.894464 kernel: rcu: RCU event tracing is enabled. Feb 13 15:10:08.894471 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:10:08.894478 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:10:08.894484 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:10:08.894491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:10:08.894498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:10:08.894505 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:10:08.894512 kernel: GICv3: 256 SPIs implemented Feb 13 15:10:08.894518 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:10:08.894525 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:10:08.894532 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:10:08.894538 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:10:08.894545 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:10:08.894552 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:10:08.894559 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:10:08.894565 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 15:10:08.894572 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 15:10:08.894579 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:10:08.894586 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:08.894592 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:10:08.894599 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:10:08.894606 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:10:08.894612 kernel: arm-pv: using stolen time PV Feb 13 15:10:08.894619 kernel: Console: colour dummy device 80x25 Feb 13 15:10:08.894626 kernel: ACPI: Core revision 20230628 Feb 13 15:10:08.894633 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:10:08.894640 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:10:08.894657 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:10:08.894664 kernel: landlock: Up and running. Feb 13 15:10:08.894671 kernel: SELinux: Initializing. Feb 13 15:10:08.894677 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:10:08.894684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:10:08.894691 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:10:08.894698 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:10:08.894705 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:10:08.894712 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:10:08.894720 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:10:08.894727 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:10:08.894734 kernel: Remapping and enabling EFI services. Feb 13 15:10:08.894740 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 15:10:08.894747 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:10:08.894754 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:10:08.894761 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 15:10:08.894768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:08.894774 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:10:08.894781 kernel: Detected PIPT I-cache on CPU2 Feb 13 15:10:08.894789 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 15:10:08.894796 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 15:10:08.894807 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:08.894815 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 15:10:08.894822 kernel: Detected PIPT I-cache on CPU3 Feb 13 15:10:08.894829 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 15:10:08.894836 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 15:10:08.894843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:08.894850 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 15:10:08.894858 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:10:08.894865 kernel: SMP: Total of 4 processors activated. Feb 13 15:10:08.894872 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:10:08.894880 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:10:08.894887 kernel: CPU features: detected: Common not Private translations Feb 13 15:10:08.894894 kernel: CPU features: detected: CRC32 instructions Feb 13 15:10:08.894901 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:10:08.894908 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:10:08.894916 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:10:08.894923 kernel: CPU features: detected: Privileged Access Never Feb 13 15:10:08.894930 kernel: CPU features: detected: RAS Extension Support Feb 13 15:10:08.894937 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:10:08.894944 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:10:08.894951 kernel: alternatives: applying system-wide alternatives Feb 13 15:10:08.894963 kernel: devtmpfs: initialized Feb 13 15:10:08.894971 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:10:08.894978 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:10:08.894987 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:10:08.894994 kernel: SMBIOS 3.0.0 present. 
Feb 13 15:10:08.895001 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 15:10:08.895008 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:10:08.895015 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:10:08.895026 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:10:08.895034 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:10:08.895041 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:10:08.895048 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Feb 13 15:10:08.895056 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:10:08.895063 kernel: cpuidle: using governor menu Feb 13 15:10:08.895070 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:10:08.895078 kernel: ASID allocator initialised with 32768 entries Feb 13 15:10:08.895085 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:10:08.895092 kernel: Serial: AMBA PL011 UART driver Feb 13 15:10:08.895099 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:10:08.895106 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:10:08.895113 kernel: Modules: 508960 pages in range for PLT usage Feb 13 15:10:08.895121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:10:08.895128 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:10:08.895135 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:10:08.895142 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:10:08.895150 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:10:08.895157 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:10:08.895164 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:10:08.895171 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:10:08.895178 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:10:08.895186 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:10:08.895193 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:10:08.895200 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:10:08.895207 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:10:08.895214 kernel: ACPI: Interpreter enabled Feb 13 15:10:08.895221 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:10:08.895228 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:10:08.895236 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:10:08.895243 kernel: printk: console [ttyAMA0] enabled Feb 13 15:10:08.895250 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:10:08.895373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:10:08.895445 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:10:08.895509 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:10:08.895570 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:10:08.895630 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:10:08.895640 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:10:08.895713 
kernel: PCI host bridge to bus 0000:00 Feb 13 15:10:08.895794 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:10:08.895854 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:10:08.895910 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:10:08.895974 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:10:08.896052 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 15:10:08.896124 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:10:08.896193 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 15:10:08.896257 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 15:10:08.896320 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:10:08.896384 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:10:08.896451 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 15:10:08.896514 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 15:10:08.896570 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:10:08.896627 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:10:08.896698 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:10:08.896709 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:10:08.896716 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:10:08.896723 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:10:08.896731 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:10:08.896738 kernel: iommu: Default domain type: Translated Feb 13 15:10:08.896745 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:10:08.896754 kernel: efivars: Registered efivars operations Feb 13 15:10:08.896761 kernel: vgaarb: loaded Feb 13 15:10:08.896768 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:10:08.896775 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:10:08.896783 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:10:08.896790 kernel: pnp: PnP ACPI init Feb 13 15:10:08.896863 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:10:08.896873 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:10:08.896882 kernel: NET: Registered PF_INET protocol family Feb 13 15:10:08.896889 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:10:08.896897 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:10:08.896904 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:10:08.896911 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:10:08.896919 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:10:08.896926 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:10:08.896933 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:10:08.896940 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:10:08.896949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:10:08.896962 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:10:08.896970 kernel: kvm [1]: HYP mode not available 
Feb 13 15:10:08.896977 kernel: Initialise system trusted keyrings Feb 13 15:10:08.896985 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:10:08.896992 kernel: Key type asymmetric registered Feb 13 15:10:08.896999 kernel: Asymmetric key parser 'x509' registered Feb 13 15:10:08.897006 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:10:08.897013 kernel: io scheduler mq-deadline registered Feb 13 15:10:08.897022 kernel: io scheduler kyber registered Feb 13 15:10:08.897029 kernel: io scheduler bfq registered Feb 13 15:10:08.897036 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:10:08.897043 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:10:08.897051 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:10:08.897120 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 15:10:08.897130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:10:08.897137 kernel: thunder_xcv, ver 1.0 Feb 13 15:10:08.897144 kernel: thunder_bgx, ver 1.0 Feb 13 15:10:08.897153 kernel: nicpf, ver 1.0 Feb 13 15:10:08.897160 kernel: nicvf, ver 1.0 Feb 13 15:10:08.897231 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:10:08.897291 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:10:08 UTC (1739459408) Feb 13 15:10:08.897301 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:10:08.897308 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:10:08.897315 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:10:08.897322 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:10:08.897331 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:10:08.897338 kernel: Segment Routing with IPv6 Feb 13 15:10:08.897345 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:10:08.897352 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:10:08.897359 kernel: Key type dns_resolver registered Feb 13 15:10:08.897366 kernel: registered taskstats version 1 Feb 13 15:10:08.897373 kernel: Loading compiled-in X.509 certificates Feb 13 15:10:08.897381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:10:08.897388 kernel: Key type .fscrypt registered Feb 13 15:10:08.897395 kernel: Key type fscrypt-provisioning registered Feb 13 15:10:08.897403 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:10:08.897410 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:10:08.897417 kernel: ima: No architecture policies found Feb 13 15:10:08.897424 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:10:08.897431 kernel: clk: Disabling unused clocks Feb 13 15:10:08.897438 kernel: Freeing unused kernel memory: 39680K Feb 13 15:10:08.897446 kernel: Run /init as init process Feb 13 15:10:08.897453 kernel: with arguments: Feb 13 15:10:08.897461 kernel: /init Feb 13 15:10:08.897468 kernel: with environment: Feb 13 15:10:08.897475 kernel: HOME=/ Feb 13 15:10:08.897482 kernel: TERM=linux Feb 13 15:10:08.897489 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:10:08.897497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:10:08.897507 systemd[1]: Detected virtualization kvm. Feb 13 15:10:08.897514 systemd[1]: Detected architecture arm64. Feb 13 15:10:08.897523 systemd[1]: Running in initrd. Feb 13 15:10:08.897530 systemd[1]: No hostname configured, using default hostname. Feb 13 15:10:08.897538 systemd[1]: Hostname set to . Feb 13 15:10:08.897546 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:10:08.897553 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:10:08.897561 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:10:08.897568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:10:08.897576 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:10:08.897586 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:10:08.897593 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:10:08.897601 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:10:08.897610 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:10:08.897618 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:10:08.897625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:10:08.897633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:10:08.897642 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:10:08.897658 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:10:08.897666 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:10:08.897674 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:10:08.897681 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:10:08.897689 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:10:08.897697 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:10:08.897705 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:10:08.897715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:10:08.897722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:10:08.897742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:10:08.897750 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:10:08.897758 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:10:08.897766 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:10:08.897774 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:10:08.897782 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:10:08.897789 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:10:08.897799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:10:08.897807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:08.897814 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:10:08.897822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:10:08.897830 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:10:08.897855 systemd-journald[239]: Collecting audit messages is disabled. Feb 13 15:10:08.897876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:10:08.897884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:08.897894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:10:08.897903 systemd-journald[239]: Journal started Feb 13 15:10:08.897921 systemd-journald[239]: Runtime Journal (/run/log/journal/da92726cacdd489e8f68f171d6ec0a65) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:10:08.888547 systemd-modules-load[240]: Inserted module 'overlay' Feb 13 15:10:08.899623 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:10:08.900592 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:10:08.903692 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:10:08.905762 kernel: Bridge firewalling registered Feb 13 15:10:08.905574 systemd-modules-load[240]: Inserted module 'br_netfilter' Feb 13 15:10:08.906797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:10:08.908433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:10:08.911678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:10:08.914093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:10:08.917312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:10:08.919707 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:08.920795 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:10:08.923484 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:10:08.925450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:10:08.927399 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:10:08.936420 dracut-cmdline[272]: dracut-dracut-053 Feb 13 15:10:08.938821 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:10:08.954590 systemd-resolved[275]: Positive Trust Anchors: Feb 13 15:10:08.956459 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:10:08.956493 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:10:08.961225 systemd-resolved[275]: Defaulting to hostname 'linux'. Feb 13 15:10:08.962124 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:10:08.962923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:10:09.004679 kernel: SCSI subsystem initialized Feb 13 15:10:09.008668 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:10:09.016675 kernel: iscsi: registered transport (tcp) Feb 13 15:10:09.030667 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:10:09.030686 kernel: QLogic iSCSI HBA Driver Feb 13 15:10:09.074707 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:10:09.083792 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:10:09.100923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:10:09.100988 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:10:09.101030 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:10:09.147677 kernel: raid6: neonx8 gen() 15780 MB/s Feb 13 15:10:09.164665 kernel: raid6: neonx4 gen() 15657 MB/s Feb 13 15:10:09.181669 kernel: raid6: neonx2 gen() 13237 MB/s Feb 13 15:10:09.198669 kernel: raid6: neonx1 gen() 10495 MB/s Feb 13 15:10:09.215660 kernel: raid6: int64x8 gen() 6963 MB/s Feb 13 15:10:09.232682 kernel: raid6: int64x4 gen() 7343 MB/s Feb 13 15:10:09.249704 kernel: raid6: int64x2 gen() 6128 MB/s Feb 13 15:10:09.266669 kernel: raid6: int64x1 gen() 5058 MB/s Feb 13 15:10:09.266697 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s Feb 13 15:10:09.283676 kernel: raid6: .... xor() 11933 MB/s, rmw enabled Feb 13 15:10:09.283698 kernel: raid6: using neon recovery algorithm Feb 13 15:10:09.288666 kernel: xor: measuring software checksum speed Feb 13 15:10:09.288693 kernel: 8regs : 19807 MB/sec Feb 13 15:10:09.290040 kernel: 32regs : 18349 MB/sec Feb 13 15:10:09.290052 kernel: arm64_neon : 27087 MB/sec Feb 13 15:10:09.290061 kernel: xor: using function: arm64_neon (27087 MB/sec) Feb 13 15:10:09.341119 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:10:09.352243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 15:10:09.363803 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:10:09.374170 systemd-udevd[458]: Using default interface naming scheme 'v255'. Feb 13 15:10:09.377209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:10:09.387778 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:10:09.398766 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Feb 13 15:10:09.423113 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:10:09.433848 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:10:09.469715 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:10:09.476838 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:10:09.488533 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:10:09.489788 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:10:09.492927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:10:09.494497 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:10:09.502765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:10:09.511046 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 15:10:09.530510 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:10:09.530605 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:10:09.530616 kernel: GPT:9289727 != 19775487 Feb 13 15:10:09.530626 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:10:09.530635 kernel: GPT:9289727 != 19775487 Feb 13 15:10:09.530656 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:10:09.530674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:10:09.511352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:10:09.523361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:10:09.523458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:09.526994 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:10:09.527917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:10:09.528044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:09.531749 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:09.539871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:09.549708 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513) Feb 13 15:10:09.550661 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (507) Feb 13 15:10:09.552824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:09.557332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:10:09.562172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Feb 13 15:10:09.568162 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:10:09.569081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:10:09.574509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:10:09.585782 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:10:09.587713 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:10:09.591835 disk-uuid[549]: Primary Header is updated. Feb 13 15:10:09.591835 disk-uuid[549]: Secondary Entries is updated. Feb 13 15:10:09.591835 disk-uuid[549]: Secondary Header is updated. Feb 13 15:10:09.594661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:10:09.614232 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:10.602684 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:10:10.603299 disk-uuid[550]: The operation has completed successfully. Feb 13 15:10:10.630866 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:10:10.630983 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:10:10.648785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:10:10.652455 sh[570]: Success Feb 13 15:10:10.667679 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:10:10.704163 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:10:10.705737 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:10:10.707664 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:10:10.718281 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:10:10.718317 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:10.718328 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:10:10.719760 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:10:10.719776 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:10:10.723484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:10:10.724680 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:10:10.736798 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:10:10.738195 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:10:10.749327 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:10:10.749374 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:10.749384 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:10:10.751706 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:10:10.759867 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:10:10.761521 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:10:10.768403 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 15:10:10.775813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:10:10.832691 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:10:10.847818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:10:10.880033 systemd-networkd[764]: lo: Link UP Feb 13 15:10:10.880044 systemd-networkd[764]: lo: Gained carrier Feb 13 15:10:10.880212 ignition[669]: Ignition 2.20.0 Feb 13 15:10:10.881197 systemd-networkd[764]: Enumeration completed Feb 13 15:10:10.880218 ignition[669]: Stage: fetch-offline Feb 13 15:10:10.881770 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:10:10.880254 ignition[669]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:10.882812 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:10.880262 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:10.882855 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:10:10.880608 ignition[669]: parsed url from cmdline: "" Feb 13 15:10:10.884597 systemd[1]: Reached target network.target - Network. Feb 13 15:10:10.880611 ignition[669]: no config URL provided Feb 13 15:10:10.886108 systemd-networkd[764]: eth0: Link UP Feb 13 15:10:10.880616 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:10:10.886111 systemd-networkd[764]: eth0: Gained carrier Feb 13 15:10:10.880623 ignition[669]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:10:10.886118 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:10.880660 ignition[669]: op(1): [started] loading QEMU firmware config module Feb 13 15:10:10.880665 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:10:10.890003 ignition[669]: op(1): [finished] loading QEMU firmware config module Feb 13 15:10:10.905693 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:10:10.935193 ignition[669]: parsing config with SHA512: d5e74f1f5eed156982202f16fb64995673609d0aad27ebcd6d1aa7e15f1bc844cd39c4ab30d17b42d81a79db9c1525f60ff3399a644d48b0882ccdd6ce440887 Feb 13 15:10:10.941097 unknown[669]: fetched base config from "system" Feb 13 15:10:10.941107 unknown[669]: fetched user config from "qemu" Feb 13 15:10:10.941523 ignition[669]: fetch-offline: fetch-offline passed Feb 13 15:10:10.943294 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:10:10.941604 ignition[669]: Ignition finished successfully Feb 13 15:10:10.944384 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:10:10.955827 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:10:10.965627 ignition[770]: Ignition 2.20.0 Feb 13 15:10:10.965637 ignition[770]: Stage: kargs Feb 13 15:10:10.965822 ignition[770]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:10.965832 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:10.966702 ignition[770]: kargs: kargs passed Feb 13 15:10:10.966744 ignition[770]: Ignition finished successfully Feb 13 15:10:10.969302 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:10:10.970989 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:10:10.984188 ignition[780]: Ignition 2.20.0 Feb 13 15:10:10.984871 ignition[780]: Stage: disks Feb 13 15:10:10.985069 ignition[780]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:10.985079 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:10.987839 ignition[780]: disks: disks passed Feb 13 15:10:10.988378 ignition[780]: Ignition finished successfully Feb 13 15:10:10.990014 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:10:10.991005 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:10:10.992200 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:10:10.993795 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:10:10.995284 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:10:10.996555 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:10:11.017893 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:10:11.028074 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:10:11.032882 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:10:11.048836 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:10:11.088690 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:10:11.089231 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:10:11.090340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:10:11.107751 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:10:11.109969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:10:11.110858 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:10:11.110895 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:10:11.110917 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:10:11.117012 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:10:11.119243 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:10:11.124598 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798) Feb 13 15:10:11.124693 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:10:11.124766 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:11.125310 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:10:11.127699 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:10:11.130506 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:10:11.161357 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:10:11.165218 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:10:11.169184 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:10:11.171995 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:10:11.250523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:10:11.261809 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:10:11.264263 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:10:11.270661 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:10:11.288848 ignition[913]: INFO : Ignition 2.20.0 Feb 13 15:10:11.288848 ignition[913]: INFO : Stage: mount Feb 13 15:10:11.290186 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:11.290186 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:11.290186 ignition[913]: INFO : mount: mount passed Feb 13 15:10:11.290186 ignition[913]: INFO : Ignition finished successfully Feb 13 15:10:11.292152 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:10:11.294887 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:10:11.303809 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:10:11.717580 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:10:11.729896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:10:11.736207 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924) Feb 13 15:10:11.736251 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:10:11.736262 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:11.737654 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:10:11.739658 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:10:11.740729 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:10:11.756864 ignition[941]: INFO : Ignition 2.20.0 Feb 13 15:10:11.756864 ignition[941]: INFO : Stage: files Feb 13 15:10:11.758240 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:11.758240 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:11.758240 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:10:11.760952 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:10:11.760952 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:10:11.764036 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:10:11.765110 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:10:11.765110 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:10:11.764530 unknown[941]: wrote ssh authorized keys file for user: core Feb 13 15:10:11.768247 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:10:11.768247 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:10:12.326411 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:10:12.716879 systemd-networkd[764]: eth0: Gained IPv6LL Feb 13 15:10:12.788859 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:10:12.788859 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:10:12.792558 ignition[941]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:10:12.792558 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:10:13.276437 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:10:14.319768 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:10:14.319768 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:10:14.322839 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:10:14.358684 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:10:14.363858 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:10:14.365035 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:10:14.365035 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:10:14.365035 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:10:14.365035 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:10:14.365035 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:10:14.365035 ignition[941]: INFO : files: files passed Feb 13 15:10:14.365035 ignition[941]: INFO : Ignition finished successfully Feb 13 15:10:14.366630 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:10:14.374819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:10:14.376405 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 15:10:14.378024 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:10:14.378117 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:10:14.385431 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:10:14.388639 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:10:14.388639 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:10:14.391218 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:10:14.391492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:10:14.393812 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:10:14.405850 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:10:14.428676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:10:14.428796 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:10:14.430886 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:10:14.432849 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:10:14.434432 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:10:14.435408 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:10:14.454317 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:10:14.467842 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:10:14.476883 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:10:14.477931 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:10:14.479606 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:10:14.480979 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:10:14.481109 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:10:14.483355 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:10:14.484971 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:10:14.486266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:10:14.487539 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:10:14.489069 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:10:14.490483 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:10:14.491845 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:10:14.493701 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:10:14.495282 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:10:14.496713 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:10:14.498126 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:10:14.498399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:10:14.500032 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 15:10:14.501680 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:10:14.503295 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:10:14.506699 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:10:14.508606 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:10:14.508819 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:10:14.510885 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:10:14.511015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:10:14.512620 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:10:14.513902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:10:14.516708 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:10:14.517760 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:10:14.519428 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:10:14.520808 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:10:14.520926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:10:14.522147 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:10:14.522234 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:10:14.523485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:10:14.523596 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:10:14.527020 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:10:14.527203 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:10:14.537864 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:10:14.539438 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:10:14.540166 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:10:14.540287 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:10:14.541890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:10:14.542006 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:10:14.548008 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:10:14.549054 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:10:14.552846 ignition[996]: INFO : Ignition 2.20.0 Feb 13 15:10:14.552846 ignition[996]: INFO : Stage: umount Feb 13 15:10:14.554374 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:14.554374 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:14.554374 ignition[996]: INFO : umount: umount passed Feb 13 15:10:14.554374 ignition[996]: INFO : Ignition finished successfully Feb 13 15:10:14.555623 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:10:14.556737 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:10:14.559302 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:10:14.560134 systemd[1]: Stopped target network.target - Network. Feb 13 15:10:14.561056 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 15:10:14.561124 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:10:14.562567 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:10:14.562611 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:10:14.563905 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:10:14.563957 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:10:14.565392 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:10:14.565434 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:10:14.567356 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:10:14.568657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:10:14.579599 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:10:14.579718 systemd-networkd[764]: eth0: DHCPv6 lease lost Feb 13 15:10:14.579836 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:10:14.582842 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:10:14.583020 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:10:14.585624 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:10:14.585713 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:10:14.593843 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:10:14.594546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:10:14.594605 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:10:14.596428 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:10:14.596470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:10:14.597877 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:10:14.597920 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:10:14.599639 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:10:14.599695 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:10:14.601401 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:10:14.613282 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:10:14.613445 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:10:14.616565 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:10:14.616756 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:10:14.619707 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:10:14.619755 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:10:14.621954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:10:14.622086 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:10:14.623701 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:10:14.623818 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:10:14.626276 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:10:14.626322 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:10:14.628750 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:10:14.628802 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:14.639872 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:10:14.640754 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:10:14.640815 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:10:14.642602 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:10:14.642669 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:10:14.644467 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:10:14.644512 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:10:14.646360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:10:14.646399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:14.648398 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:10:14.649060 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:10:14.650837 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:10:14.652102 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:10:14.654568 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:10:14.656003 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:10:14.656097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:10:14.658476 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:10:14.673481 systemd[1]: Switching root. Feb 13 15:10:14.704839 systemd-journald[239]: Journal stopped Feb 13 15:10:15.379263 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 15:10:15.379323 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:10:15.379335 kernel: SELinux: policy capability open_perms=1 Feb 13 15:10:15.379354 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:10:15.379364 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:10:15.379378 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:10:15.379388 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:10:15.379398 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:10:15.379409 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:10:15.379419 kernel: audit: type=1403 audit(1739459414.836:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:10:15.379429 systemd[1]: Successfully loaded SELinux policy in 31.836ms. Feb 13 15:10:15.379442 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.694ms. Feb 13 15:10:15.379454 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:10:15.379465 systemd[1]: Detected virtualization kvm. Feb 13 15:10:15.379475 systemd[1]: Detected architecture arm64. 
Feb 13 15:10:15.379486 systemd[1]: Detected first boot. Feb 13 15:10:15.379502 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:10:15.379513 zram_generator::config[1041]: No configuration found. Feb 13 15:10:15.379529 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:10:15.379539 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:10:15.379550 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:10:15.379561 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:10:15.379573 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:10:15.379584 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:10:15.379597 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:10:15.379609 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:10:15.379620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:10:15.379631 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:10:15.379653 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:10:15.379667 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:10:15.379678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:10:15.379689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:10:15.379700 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:10:15.379712 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:10:15.379723 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:10:15.379733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:10:15.379744 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:10:15.379755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:10:15.379765 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:10:15.379776 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:10:15.379786 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:10:15.379799 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:10:15.379810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:10:15.379821 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:10:15.379831 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:10:15.379843 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:10:15.379854 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:10:15.379864 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:10:15.379875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:10:15.379885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 15:10:15.379897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:10:15.379908 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:10:15.379919 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:10:15.379934 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:10:15.379950 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:10:15.379962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:10:15.379972 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:10:15.379983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:10:15.379994 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:10:15.380006 systemd[1]: Reached target machines.target - Containers. Feb 13 15:10:15.380017 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:10:15.380028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:15.380039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:10:15.380050 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:10:15.380060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:15.380071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:10:15.380095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:15.380108 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:10:15.380119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:15.380130 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:10:15.380141 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:10:15.380152 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:10:15.380162 kernel: fuse: init (API version 7.39) Feb 13 15:10:15.380173 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:10:15.380184 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:10:15.380194 kernel: loop: module loaded Feb 13 15:10:15.380207 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:10:15.380217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:10:15.380228 kernel: ACPI: bus type drm_connector registered Feb 13 15:10:15.380239 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:10:15.380249 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:10:15.380281 systemd-journald[1108]: Collecting audit messages is disabled. Feb 13 15:10:15.380303 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:10:15.380314 systemd-journald[1108]: Journal started Feb 13 15:10:15.380338 systemd-journald[1108]: Runtime Journal (/run/log/journal/da92726cacdd489e8f68f171d6ec0a65) is 5.9M, max 47.3M, 41.4M free. 
Feb 13 15:10:15.201382 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:10:15.217445 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:10:15.217808 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:10:15.381186 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:10:15.382038 systemd[1]: Stopped verity-setup.service. Feb 13 15:10:15.386091 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:10:15.386745 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:10:15.387810 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:10:15.388866 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:10:15.389794 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:10:15.390809 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:10:15.391899 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:10:15.393074 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:10:15.394373 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:10:15.395894 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:10:15.396733 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:10:15.398053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:15.398193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:15.399462 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:10:15.399601 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:10:15.400863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:15.401017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:15.402277 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:10:15.402416 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:10:15.403877 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:15.404039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:15.405267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:10:15.406625 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:10:15.408058 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:10:15.420921 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:10:15.428794 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:10:15.430928 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:10:15.431875 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:10:15.431909 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:10:15.433727 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:10:15.435847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 15:10:15.438877 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:10:15.440051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:15.441758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:10:15.444046 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:10:15.445212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:10:15.448844 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:10:15.450009 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:10:15.451966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:10:15.453865 systemd-journald[1108]: Time spent on flushing to /var/log/journal/da92726cacdd489e8f68f171d6ec0a65 is 17.828ms for 855 entries. Feb 13 15:10:15.453865 systemd-journald[1108]: System Journal (/var/log/journal/da92726cacdd489e8f68f171d6ec0a65) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:10:15.489124 systemd-journald[1108]: Received client request to flush runtime journal. Feb 13 15:10:15.489194 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 15:10:15.458372 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:10:15.461419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:10:15.467478 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:10:15.468799 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:10:15.470114 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:10:15.471328 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:10:15.473545 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:10:15.478008 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:10:15.489926 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:10:15.495909 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:10:15.497540 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:10:15.499326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:10:15.503743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:10:15.514199 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:10:15.517147 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Feb 13 15:10:15.517165 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Feb 13 15:10:15.520347 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:10:15.522746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:10:15.524329 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 15:10:15.531703 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 15:10:15.534959 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:10:15.564254 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:10:15.567678 kernel: loop2: detected capacity change from 0 to 194512 Feb 13 15:10:15.577004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:10:15.593208 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:10:15.593231 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:10:15.597571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:10:15.625906 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 15:10:15.630685 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 15:10:15.635682 kernel: loop5: detected capacity change from 0 to 194512 Feb 13 15:10:15.642076 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:10:15.642493 (sd-merge)[1181]: Merged extensions into '/usr'. Feb 13 15:10:15.649708 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:10:15.649723 systemd[1]: Reloading... Feb 13 15:10:15.701682 zram_generator::config[1206]: No configuration found. Feb 13 15:10:15.778962 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:10:15.829440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:15.865901 systemd[1]: Reloading finished in 215 ms. Feb 13 15:10:15.892961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:10:15.894370 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:10:15.905854 systemd[1]: Starting ensure-sysext.service... Feb 13 15:10:15.907812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:10:15.915102 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:10:15.915117 systemd[1]: Reloading... Feb 13 15:10:15.926628 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:10:15.926947 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:10:15.927638 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:10:15.927888 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Feb 13 15:10:15.927935 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Feb 13 15:10:15.930192 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:10:15.930207 systemd-tmpfiles[1242]: Skipping /boot Feb 13 15:10:15.937827 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:10:15.937846 systemd-tmpfiles[1242]: Skipping /boot Feb 13 15:10:15.975665 zram_generator::config[1269]: No configuration found. 
Feb 13 15:10:16.066582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:16.103735 systemd[1]: Reloading finished in 188 ms. Feb 13 15:10:16.118189 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:10:16.131163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:10:16.137764 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:10:16.140136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:10:16.142341 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:10:16.147862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:10:16.154012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:10:16.156188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:10:16.164739 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:10:16.167327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:16.170085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:16.175132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:16.180723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:16.181676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:16.182475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:16.183700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:16.185340 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:10:16.187028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:16.187208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:16.192121 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:16.192331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:16.200603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:16.235837 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:16.238909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:16.239571 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Feb 13 15:10:16.240932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:16.241887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:16.247050 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:10:16.248673 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:10:16.251569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Feb 13 15:10:16.253740 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:10:16.255923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:16.256084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:16.257642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:16.257815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:16.259475 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:16.259688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:16.261214 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:10:16.266386 augenrules[1347]: No rules Feb 13 15:10:16.267485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:10:16.269660 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:10:16.269842 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:10:16.278459 systemd[1]: Finished ensure-sysext.service. Feb 13 15:10:16.295755 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:10:16.296744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:16.298494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:16.301090 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:10:16.303063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:16.305653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:16.306678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:16.309934 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:10:16.316475 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:10:16.317667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1370) Feb 13 15:10:16.317790 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:10:16.319908 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:10:16.344789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:16.345093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:16.347969 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:16.348376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:16.352416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:16.352619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:16.355841 augenrules[1379]: /sbin/augenrules: No change Feb 13 15:10:16.355964 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:10:16.356123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:10:16.357897 systemd-resolved[1308]: Positive Trust Anchors: Feb 13 15:10:16.357982 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:10:16.358014 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:10:16.359354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:10:16.359428 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:10:16.367061 systemd-resolved[1308]: Defaulting to hostname 'linux'. Feb 13 15:10:16.368595 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:10:16.369850 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:10:16.385947 augenrules[1414]: No rules Feb 13 15:10:16.386879 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:10:16.387136 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:10:16.389988 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:10:16.396619 systemd-networkd[1388]: lo: Link UP Feb 13 15:10:16.396901 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:10:16.397048 systemd-networkd[1388]: lo: Gained carrier Feb 13 15:10:16.400265 systemd-networkd[1388]: Enumeration completed Feb 13 15:10:16.400465 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:10:16.401569 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:16.401608 systemd[1]: Reached target network.target - Network. Feb 13 15:10:16.401741 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:10:16.402704 systemd-networkd[1388]: eth0: Link UP Feb 13 15:10:16.402811 systemd-networkd[1388]: eth0: Gained carrier Feb 13 15:10:16.402829 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:16.403916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:10:16.421638 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:10:16.424763 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:10:16.425837 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:10:16.426828 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:10:16.426954 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:10:16.427065 systemd-timesyncd[1389]: Initial clock synchronization to Thu 2025-02-13 15:10:16.088356 UTC. 
Feb 13 15:10:16.428761 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:10:16.451975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:16.460107 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:10:16.464258 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:10:16.493946 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:10:16.502427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:16.538544 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:10:16.539893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:10:16.540792 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:10:16.541778 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:10:16.542721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:10:16.543912 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:10:16.544961 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:10:16.545930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:10:16.547032 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:10:16.547067 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:10:16.547746 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:10:16.549397 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:10:16.551839 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:10:16.567793 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:10:16.570450 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:10:16.571967 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:10:16.572932 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:10:16.573735 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:10:16.574597 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:10:16.574635 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:10:16.575755 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:10:16.577672 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:10:16.579064 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:10:16.580801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:10:16.583527 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:10:16.584429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:10:16.589114 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 15:10:16.597356 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:10:16.599730 jq[1441]: false Feb 13 15:10:16.599823 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:10:16.602327 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:10:16.603407 extend-filesystems[1442]: Found loop3 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found loop4 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found loop5 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda1 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda2 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda3 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found usr Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda4 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda6 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda7 Feb 13 15:10:16.603407 extend-filesystems[1442]: Found vda9 Feb 13 15:10:16.603407 extend-filesystems[1442]: Checking size of /dev/vda9 Feb 13 15:10:16.619182 dbus-daemon[1440]: [system] SELinux support is enabled Feb 13 15:10:16.606891 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:10:16.629310 extend-filesystems[1442]: Resized partition /dev/vda9 Feb 13 15:10:16.613302 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:10:16.613876 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:10:16.614982 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:10:16.618085 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:10:16.619517 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:10:16.623474 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:10:16.626613 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:10:16.626806 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:10:16.627110 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:10:16.627262 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:10:16.632267 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:10:16.635971 jq[1457]: true Feb 13 15:10:16.632434 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:10:16.636475 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:10:16.640662 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:10:16.647102 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:10:16.647152 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 15:10:16.647660 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1358) Feb 13 15:10:16.652881 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:10:16.652924 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:10:16.658295 jq[1466]: true Feb 13 15:10:16.679163 update_engine[1456]: I20250213 15:10:16.678982 1456 main.cc:92] Flatcar Update Engine starting Feb 13 15:10:16.685722 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:10:16.685119 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:10:16.686593 tar[1464]: linux-arm64/helm Feb 13 15:10:16.691189 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:10:16.701773 update_engine[1456]: I20250213 15:10:16.691241 1456 update_check_scheduler.cc:74] Next update check in 7m5s Feb 13 15:10:16.694869 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:10:16.702170 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:10:16.702170 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:10:16.702170 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:10:16.709951 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Feb 13 15:10:16.704394 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:10:16.704575 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:10:16.711745 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:10:16.713709 systemd-logind[1454]: New seat seat0. Feb 13 15:10:16.715629 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:10:16.773344 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:10:16.782728 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:10:16.785517 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:10:16.792365 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:10:16.934641 containerd[1475]: time="2025-02-13T15:10:16.934497400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:10:16.968220 containerd[1475]: time="2025-02-13T15:10:16.967948520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.969498 containerd[1475]: time="2025-02-13T15:10:16.969451760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.969611400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.969637600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.969822960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.969842600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.969901800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.969914160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.970099480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.970115800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.970130040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.970139480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.970216320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970693 containerd[1475]: time="2025-02-13T15:10:16.970410280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970968 containerd[1475]: time="2025-02-13T15:10:16.970500240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:16.970968 containerd[1475]: time="2025-02-13T15:10:16.970513320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:10:16.970968 containerd[1475]: time="2025-02-13T15:10:16.970584160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:10:16.970968 containerd[1475]: time="2025-02-13T15:10:16.970631040Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:10:16.979413 containerd[1475]: time="2025-02-13T15:10:16.978507800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:10:16.979413 containerd[1475]: time="2025-02-13T15:10:16.978610240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:10:16.979413 containerd[1475]: time="2025-02-13T15:10:16.978629640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 15:10:16.979413 containerd[1475]: time="2025-02-13T15:10:16.978666000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:10:16.979413 containerd[1475]: time="2025-02-13T15:10:16.978767600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:10:16.979413 containerd[1475]: time="2025-02-13T15:10:16.979030320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:10:16.979622 containerd[1475]: time="2025-02-13T15:10:16.979502480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:10:16.979765 containerd[1475]: time="2025-02-13T15:10:16.979726960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:10:16.979765 containerd[1475]: time="2025-02-13T15:10:16.979760480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:10:16.979823 containerd[1475]: time="2025-02-13T15:10:16.979789400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:10:16.979823 containerd[1475]: time="2025-02-13T15:10:16.979805720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.979823 containerd[1475]: time="2025-02-13T15:10:16.979818280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.979873 containerd[1475]: time="2025-02-13T15:10:16.979838560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.979873 containerd[1475]: time="2025-02-13T15:10:16.979859680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.979968 containerd[1475]: time="2025-02-13T15:10:16.979920640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.979995 containerd[1475]: time="2025-02-13T15:10:16.979967840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.979995 containerd[1475]: time="2025-02-13T15:10:16.979984880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.980028 containerd[1475]: time="2025-02-13T15:10:16.979996960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:10:16.980053 containerd[1475]: time="2025-02-13T15:10:16.980027800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980053 containerd[1475]: time="2025-02-13T15:10:16.980046480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980096 containerd[1475]: time="2025-02-13T15:10:16.980065160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980096 containerd[1475]: time="2025-02-13T15:10:16.980077600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 15:10:16.980135 containerd[1475]: time="2025-02-13T15:10:16.980098160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980135 containerd[1475]: time="2025-02-13T15:10:16.980112400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980135 containerd[1475]: time="2025-02-13T15:10:16.980124280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980187 containerd[1475]: time="2025-02-13T15:10:16.980138480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980187 containerd[1475]: time="2025-02-13T15:10:16.980153080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980187 containerd[1475]: time="2025-02-13T15:10:16.980179280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980236 containerd[1475]: time="2025-02-13T15:10:16.980196720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980236 containerd[1475]: time="2025-02-13T15:10:16.980209000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980236 containerd[1475]: time="2025-02-13T15:10:16.980222080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980303 containerd[1475]: time="2025-02-13T15:10:16.980284680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:10:16.980333 containerd[1475]: time="2025-02-13T15:10:16.980321240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980353 containerd[1475]: time="2025-02-13T15:10:16.980340560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.980371 containerd[1475]: time="2025-02-13T15:10:16.980362200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:10:16.980891 containerd[1475]: time="2025-02-13T15:10:16.980870160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:10:16.980920 containerd[1475]: time="2025-02-13T15:10:16.980909280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:10:16.980966 containerd[1475]: time="2025-02-13T15:10:16.980922920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:10:16.980966 containerd[1475]: time="2025-02-13T15:10:16.980953480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:10:16.981005 containerd[1475]: time="2025-02-13T15:10:16.980966200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.981112 containerd[1475]: time="2025-02-13T15:10:16.981084840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 15:10:16.981112 containerd[1475]: time="2025-02-13T15:10:16.981110480Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:10:16.981159 containerd[1475]: time="2025-02-13T15:10:16.981123560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:10:16.981919 containerd[1475]: time="2025-02-13T15:10:16.981847520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:10:16.981919 containerd[1475]: time="2025-02-13T15:10:16.981913040Z" level=info msg="Connect containerd service" Feb 13 15:10:16.982075 containerd[1475]: time="2025-02-13T15:10:16.981971800Z" level=info msg="using legacy CRI server" Feb 13 15:10:16.982075 containerd[1475]: time="2025-02-13T15:10:16.981982120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:10:16.982564 containerd[1475]: time="2025-02-13T15:10:16.982523040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:10:16.983713 
containerd[1475]: time="2025-02-13T15:10:16.983677040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:10:16.985033 containerd[1475]: time="2025-02-13T15:10:16.984113240Z" level=info msg="Start subscribing containerd event" Feb 13 15:10:16.985033 containerd[1475]: time="2025-02-13T15:10:16.984174800Z" level=info msg="Start recovering state" Feb 13 15:10:16.985033 containerd[1475]: time="2025-02-13T15:10:16.984242560Z" level=info msg="Start event monitor" Feb 13 15:10:16.985033 containerd[1475]: time="2025-02-13T15:10:16.984253960Z" level=info msg="Start snapshots syncer" Feb 13 15:10:16.985033 containerd[1475]: time="2025-02-13T15:10:16.984264000Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:10:16.985033 containerd[1475]: time="2025-02-13T15:10:16.984271920Z" level=info msg="Start streaming server" Feb 13 15:10:16.985449 containerd[1475]: time="2025-02-13T15:10:16.985404200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:10:16.985494 containerd[1475]: time="2025-02-13T15:10:16.985468640Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:10:16.985539 containerd[1475]: time="2025-02-13T15:10:16.985526040Z" level=info msg="containerd successfully booted in 0.051956s" Feb 13 15:10:16.985637 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:10:17.041675 tar[1464]: linux-arm64/LICENSE Feb 13 15:10:17.041675 tar[1464]: linux-arm64/README.md Feb 13 15:10:17.055378 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:10:17.062392 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:10:17.084692 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:10:17.097038 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:10:17.103185 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:10:17.104692 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:10:17.108933 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:10:17.125787 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:10:17.142083 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:10:17.144296 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:10:17.145423 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:10:17.708799 systemd-networkd[1388]: eth0: Gained IPv6LL Feb 13 15:10:17.710952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:10:17.715435 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:10:17.731958 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:10:17.734658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:17.736561 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:10:17.766204 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:10:17.766423 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:10:17.768303 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 15:10:17.774589 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:10:18.271730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:18.272935 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:10:18.275883 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:10:18.276699 systemd[1]: Startup finished in 525ms (kernel) + 6.135s (initrd) + 3.473s (userspace) = 10.135s. Feb 13 15:10:18.826497 kubelet[1554]: E0213 15:10:18.826415 1554 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:10:18.829766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:10:18.829934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:10:21.015063 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:10:21.016123 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:56794.service - OpenSSH per-connection server daemon (10.0.0.1:56794). Feb 13 15:10:21.076366 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 56794 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:21.080207 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:21.088635 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:10:21.104955 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:10:21.108228 systemd-logind[1454]: New session 1 of user core. Feb 13 15:10:21.113475 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:10:21.115482 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:10:21.121666 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:10:21.193160 systemd[1572]: Queued start job for default target default.target. Feb 13 15:10:21.202637 systemd[1572]: Created slice app.slice - User Application Slice. Feb 13 15:10:21.202691 systemd[1572]: Reached target paths.target - Paths. Feb 13 15:10:21.202704 systemd[1572]: Reached target timers.target - Timers. Feb 13 15:10:21.203872 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:10:21.212717 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:10:21.212776 systemd[1572]: Reached target sockets.target - Sockets. Feb 13 15:10:21.212788 systemd[1572]: Reached target basic.target - Basic System. Feb 13 15:10:21.212823 systemd[1572]: Reached target default.target - Main User Target. Feb 13 15:10:21.212847 systemd[1572]: Startup finished in 86ms. Feb 13 15:10:21.213060 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:10:21.214432 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:10:21.272690 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). 
Feb 13 15:10:21.312620 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:21.313826 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:21.317375 systemd-logind[1454]: New session 2 of user core. Feb 13 15:10:21.328866 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:10:21.380047 sshd[1585]: Connection closed by 10.0.0.1 port 56796 Feb 13 15:10:21.380840 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:21.395087 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:56796.service: Deactivated successfully. Feb 13 15:10:21.396574 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:10:21.397737 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:10:21.399392 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:56804.service - OpenSSH per-connection server daemon (10.0.0.1:56804). Feb 13 15:10:21.401375 systemd-logind[1454]: Removed session 2. Feb 13 15:10:21.438794 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 56804 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:21.439978 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:21.443494 systemd-logind[1454]: New session 3 of user core. Feb 13 15:10:21.454887 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:10:21.502453 sshd[1592]: Connection closed by 10.0.0.1 port 56804 Feb 13 15:10:21.502850 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:21.514010 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:56804.service: Deactivated successfully. Feb 13 15:10:21.515262 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:10:21.521866 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:10:21.524175 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:56806.service - OpenSSH per-connection server daemon (10.0.0.1:56806). Feb 13 15:10:21.526402 systemd-logind[1454]: Removed session 3. Feb 13 15:10:21.565802 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 56806 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:21.566250 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:21.570319 systemd-logind[1454]: New session 4 of user core. Feb 13 15:10:21.579810 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:10:21.630086 sshd[1599]: Connection closed by 10.0.0.1 port 56806 Feb 13 15:10:21.630940 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:21.641905 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:56806.service: Deactivated successfully. Feb 13 15:10:21.643913 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:10:21.645733 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:10:21.654965 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:56822.service - OpenSSH per-connection server daemon (10.0.0.1:56822). Feb 13 15:10:21.657059 systemd-logind[1454]: Removed session 4. 
Feb 13 15:10:21.693685 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 56822 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:21.694877 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:21.699654 systemd-logind[1454]: New session 5 of user core. Feb 13 15:10:21.710281 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:10:21.780283 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:10:21.782545 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:10:21.797360 sudo[1607]: pam_unix(sudo:session): session closed for user root Feb 13 15:10:21.798852 sshd[1606]: Connection closed by 10.0.0.1 port 56822 Feb 13 15:10:21.799507 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:21.810152 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:56822.service: Deactivated successfully. Feb 13 15:10:21.811442 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:10:21.813724 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:10:21.819881 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:56830.service - OpenSSH per-connection server daemon (10.0.0.1:56830). Feb 13 15:10:21.820687 systemd-logind[1454]: Removed session 5. Feb 13 15:10:21.856118 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 56830 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:21.857437 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:21.861219 systemd-logind[1454]: New session 6 of user core. Feb 13 15:10:21.872798 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:10:21.923962 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:10:21.924253 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:10:21.927305 sudo[1616]: pam_unix(sudo:session): session closed for user root Feb 13 15:10:21.933923 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:10:21.934244 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:10:21.952042 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:10:21.975183 augenrules[1638]: No rules Feb 13 15:10:21.976485 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:10:21.976755 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:10:21.977716 sudo[1615]: pam_unix(sudo:session): session closed for user root Feb 13 15:10:21.979050 sshd[1614]: Connection closed by 10.0.0.1 port 56830 Feb 13 15:10:21.979629 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:21.988075 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:56830.service: Deactivated successfully. Feb 13 15:10:21.989528 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:10:21.990828 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:10:22.005955 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:56836.service - OpenSSH per-connection server daemon (10.0.0.1:56836). Feb 13 15:10:22.006801 systemd-logind[1454]: Removed session 6. 
Feb 13 15:10:22.042279 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 56836 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:10:22.043548 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:22.047390 systemd-logind[1454]: New session 7 of user core. Feb 13 15:10:22.057842 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:10:22.108292 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:10:22.108578 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:10:22.416002 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:10:22.416166 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:10:22.652641 dockerd[1670]: time="2025-02-13T15:10:22.652581660Z" level=info msg="Starting up" Feb 13 15:10:22.824264 dockerd[1670]: time="2025-02-13T15:10:22.824093141Z" level=info msg="Loading containers: start." Feb 13 15:10:22.973667 kernel: Initializing XFRM netlink socket Feb 13 15:10:23.041136 systemd-networkd[1388]: docker0: Link UP Feb 13 15:10:23.071879 dockerd[1670]: time="2025-02-13T15:10:23.071815502Z" level=info msg="Loading containers: done." Feb 13 15:10:23.084267 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2126203935-merged.mount: Deactivated successfully. Feb 13 15:10:23.085636 dockerd[1670]: time="2025-02-13T15:10:23.085561156Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:10:23.085721 dockerd[1670]: time="2025-02-13T15:10:23.085704620Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:10:23.085836 dockerd[1670]: time="2025-02-13T15:10:23.085807509Z" level=info msg="Daemon has completed initialization" Feb 13 15:10:23.118023 dockerd[1670]: time="2025-02-13T15:10:23.117963629Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:10:23.118213 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:10:23.992564 containerd[1475]: time="2025-02-13T15:10:23.992407247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:10:24.760116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545745891.mount: Deactivated successfully. 
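Above, dockerd 27.2.1 comes up with the overlay2 storage driver and reports its API on /run/docker.sock. As a hedged illustration (not taken from this log), the Docker Engine Go SDK can query that endpoint directly; the socket path comes from the "API listen" record above, the rest is stock SDK usage.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Talk to the engine that logged "API listen on /run/docker.sock".
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("engine %s, API %s\n", v.Version, v.APIVersion)
}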
Feb 13 15:10:26.012090 containerd[1475]: time="2025-02-13T15:10:26.012041743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:26.013093 containerd[1475]: time="2025-02-13T15:10:26.013021443Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205863" Feb 13 15:10:26.016746 containerd[1475]: time="2025-02-13T15:10:26.016707917Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:26.024719 containerd[1475]: time="2025-02-13T15:10:26.023471475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:26.024719 containerd[1475]: time="2025-02-13T15:10:26.024685754Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.032232657s" Feb 13 15:10:26.024719 containerd[1475]: time="2025-02-13T15:10:26.024714543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:10:26.043693 containerd[1475]: time="2025-02-13T15:10:26.043631196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:10:27.859275 containerd[1475]: time="2025-02-13T15:10:27.859218587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:27.860301 containerd[1475]: time="2025-02-13T15:10:27.860227672Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383093" Feb 13 15:10:27.861204 containerd[1475]: time="2025-02-13T15:10:27.861163976Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:27.866282 containerd[1475]: time="2025-02-13T15:10:27.866226448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:27.867098 containerd[1475]: time="2025-02-13T15:10:27.866977912Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.823291369s" Feb 13 15:10:27.867098 containerd[1475]: time="2025-02-13T15:10:27.867016518Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
15:10:27.889491 containerd[1475]: time="2025-02-13T15:10:27.889410957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:10:29.049535 containerd[1475]: time="2025-02-13T15:10:29.049468966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:29.050907 containerd[1475]: time="2025-02-13T15:10:29.050855452Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766982" Feb 13 15:10:29.051878 containerd[1475]: time="2025-02-13T15:10:29.051846814Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:29.054641 containerd[1475]: time="2025-02-13T15:10:29.054601028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:29.055901 containerd[1475]: time="2025-02-13T15:10:29.055866315Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.166414712s" Feb 13 15:10:29.055952 containerd[1475]: time="2025-02-13T15:10:29.055901453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:10:29.075350 containerd[1475]: time="2025-02-13T15:10:29.075303535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:10:29.080122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:10:29.099878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:29.190328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:29.194884 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:10:29.262901 kubelet[1964]: E0213 15:10:29.262797 1964 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:10:29.266681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:10:29.266835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:10:30.154881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111100976.mount: Deactivated successfully. 
Feb 13 15:10:30.510832 containerd[1475]: time="2025-02-13T15:10:30.510703124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:30.511571 containerd[1475]: time="2025-02-13T15:10:30.511518004Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377" Feb 13 15:10:30.512315 containerd[1475]: time="2025-02-13T15:10:30.512280517Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:30.514735 containerd[1475]: time="2025-02-13T15:10:30.514672591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:30.515456 containerd[1475]: time="2025-02-13T15:10:30.515356058Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.440009879s" Feb 13 15:10:30.515456 containerd[1475]: time="2025-02-13T15:10:30.515400524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:10:30.546305 containerd[1475]: time="2025-02-13T15:10:30.546267017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:10:31.176318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430105500.mount: Deactivated successfully. 
Feb 13 15:10:31.842839 containerd[1475]: time="2025-02-13T15:10:31.842781128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:31.843982 containerd[1475]: time="2025-02-13T15:10:31.843945086Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:10:31.845629 containerd[1475]: time="2025-02-13T15:10:31.845590642Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:31.848235 containerd[1475]: time="2025-02-13T15:10:31.848183577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:31.849728 containerd[1475]: time="2025-02-13T15:10:31.849689252Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.303383239s" Feb 13 15:10:31.849767 containerd[1475]: time="2025-02-13T15:10:31.849726209Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:10:31.868196 containerd[1475]: time="2025-02-13T15:10:31.868145356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:10:32.723470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378364016.mount: Deactivated successfully. 
Feb 13 15:10:32.728262 containerd[1475]: time="2025-02-13T15:10:32.728219549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:32.728693 containerd[1475]: time="2025-02-13T15:10:32.728654691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 15:10:32.729639 containerd[1475]: time="2025-02-13T15:10:32.729613468Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:32.732604 containerd[1475]: time="2025-02-13T15:10:32.732567193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:32.733566 containerd[1475]: time="2025-02-13T15:10:32.733521236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 865.337764ms" Feb 13 15:10:32.733566 containerd[1475]: time="2025-02-13T15:10:32.733554326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:10:32.753885 containerd[1475]: time="2025-02-13T15:10:32.753801503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:10:33.385577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176543060.mount: Deactivated successfully. Feb 13 15:10:35.257005 containerd[1475]: time="2025-02-13T15:10:35.256944658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:35.257944 containerd[1475]: time="2025-02-13T15:10:35.257887479Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Feb 13 15:10:35.258879 containerd[1475]: time="2025-02-13T15:10:35.258853731Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:35.261953 containerd[1475]: time="2025-02-13T15:10:35.261924624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:35.263308 containerd[1475]: time="2025-02-13T15:10:35.263226228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.509348228s" Feb 13 15:10:35.263308 containerd[1475]: time="2025-02-13T15:10:35.263263644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:10:39.517059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
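The PullImage records above fetch the v1.29.14 control-plane images, CoreDNS, pause and etcd through containerd's CRI path. A rough Go equivalent using the containerd client is sketched below; the image reference is one of the images pulled above, and the "k8s.io" namespace is assumed because that is where the CRI plugin keeps Kubernetes images.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images pulled via the CRI plugin live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same etcd image the pull records above fetched for the control plane.
	img, err := client.Pull(ctx, "registry.k8s.io/etcd:3.5.10-0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}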
Feb 13 15:10:39.527279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:39.596508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:10:39.596582 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:10:39.596805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:39.615033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:39.632734 systemd[1]: Reloading requested from client PID 2182 ('systemctl') (unit session-7.scope)... Feb 13 15:10:39.632752 systemd[1]: Reloading... Feb 13 15:10:39.691714 zram_generator::config[2221]: No configuration found. Feb 13 15:10:39.811035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:39.862300 systemd[1]: Reloading finished in 229 ms. Feb 13 15:10:39.914203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:39.917094 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:10:39.917286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:39.919865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:40.008005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:40.012234 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:10:40.053873 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:10:40.053873 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:10:40.053873 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
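The deprecation warnings above ask for --container-runtime-endpoint and --volume-plugin-dir to move into the KubeletConfiguration file passed via --config, the same /var/lib/kubelet/config.yaml whose absence caused the two earlier kubelet exits. A minimal sketch of generating such a file follows, assuming the external k8s.io/kubelet/config/v1beta1 types; a real node would normally have kubeadm or the provisioning tooling write a much fuller version.

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Minimal stand-in for /var/lib/kubelet/config.yaml; field choices here
	// are illustrative assumptions, not the configuration used on this node.
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Matches SystemdCgroup:true in the containerd CRI config above.
		CgroupDriver: "systemd",
	}

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}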
Feb 13 15:10:40.054192 kubelet[2268]: I0213 15:10:40.053911 2268 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:10:40.422906 kubelet[2268]: I0213 15:10:40.422795 2268 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:10:40.422906 kubelet[2268]: I0213 15:10:40.422835 2268 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:10:40.423371 kubelet[2268]: I0213 15:10:40.423192 2268 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:10:40.449196 kubelet[2268]: I0213 15:10:40.449156 2268 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:10:40.454174 kubelet[2268]: E0213 15:10:40.454104 2268 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.462165 kubelet[2268]: I0213 15:10:40.462102 2268 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:10:40.462851 kubelet[2268]: I0213 15:10:40.462816 2268 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:10:40.463061 kubelet[2268]: I0213 15:10:40.463038 2268 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:10:40.463061 kubelet[2268]: I0213 15:10:40.463062 2268 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:10:40.463171 kubelet[2268]: I0213 15:10:40.463071 2268 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:10:40.463199 kubelet[2268]: I0213 15:10:40.463193 2268 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:40.468750 kubelet[2268]: I0213 15:10:40.468711 2268 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:10:40.469182 kubelet[2268]: I0213 
15:10:40.469156 2268 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:10:40.469222 kubelet[2268]: I0213 15:10:40.469190 2268 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:10:40.469222 kubelet[2268]: I0213 15:10:40.469206 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:10:40.469304 kubelet[2268]: W0213 15:10:40.469256 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.469340 kubelet[2268]: E0213 15:10:40.469329 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.470256 kubelet[2268]: W0213 15:10:40.470205 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.472666 kubelet[2268]: E0213 15:10:40.471367 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.472666 kubelet[2268]: I0213 15:10:40.471230 2268 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:10:40.472666 kubelet[2268]: I0213 15:10:40.471890 2268 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:10:40.472666 kubelet[2268]: W0213 15:10:40.471994 2268 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:10:40.472860 kubelet[2268]: I0213 15:10:40.472836 2268 server.go:1256] "Started kubelet" Feb 13 15:10:40.472962 kubelet[2268]: I0213 15:10:40.472944 2268 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:10:40.473832 kubelet[2268]: I0213 15:10:40.473803 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:10:40.474080 kubelet[2268]: I0213 15:10:40.474058 2268 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:10:40.482101 kubelet[2268]: I0213 15:10:40.482046 2268 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:10:40.487044 kubelet[2268]: E0213 15:10:40.486753 2268 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd252294f0b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:10:40.47281375 +0000 UTC m=+0.457197853,LastTimestamp:2025-02-13 15:10:40.47281375 +0000 UTC m=+0.457197853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:10:40.487245 kubelet[2268]: I0213 15:10:40.487194 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:10:40.487623 kubelet[2268]: I0213 15:10:40.487593 2268 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:10:40.487838 kubelet[2268]: I0213 15:10:40.487821 2268 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:10:40.487910 kubelet[2268]: I0213 15:10:40.487899 2268 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:10:40.489133 kubelet[2268]: W0213 15:10:40.489088 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.489263 kubelet[2268]: E0213 15:10:40.489143 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.489263 kubelet[2268]: E0213 15:10:40.489220 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Feb 13 15:10:40.489556 kubelet[2268]: E0213 15:10:40.489480 2268 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:10:40.490162 kubelet[2268]: I0213 15:10:40.489591 2268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:10:40.491040 kubelet[2268]: I0213 15:10:40.491017 2268 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:10:40.491040 kubelet[2268]: I0213 15:10:40.491034 2268 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:10:40.502259 kubelet[2268]: I0213 15:10:40.502117 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:10:40.504351 kubelet[2268]: I0213 15:10:40.503779 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:10:40.504351 kubelet[2268]: I0213 15:10:40.503808 2268 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:10:40.504351 kubelet[2268]: I0213 15:10:40.503825 2268 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:10:40.504351 kubelet[2268]: E0213 15:10:40.503880 2268 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:10:40.504351 kubelet[2268]: W0213 15:10:40.504319 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.504536 kubelet[2268]: E0213 15:10:40.504363 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:40.505433 kubelet[2268]: I0213 15:10:40.505406 2268 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:10:40.505433 kubelet[2268]: I0213 15:10:40.505429 2268 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:10:40.505529 kubelet[2268]: I0213 15:10:40.505445 2268 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:40.589797 kubelet[2268]: I0213 15:10:40.589765 2268 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:10:40.590266 kubelet[2268]: E0213 15:10:40.590238 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 15:10:40.604385 kubelet[2268]: E0213 15:10:40.604334 2268 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:10:40.613189 kubelet[2268]: I0213 15:10:40.613152 2268 policy_none.go:49] "None policy: Start" Feb 13 15:10:40.613944 kubelet[2268]: I0213 15:10:40.613924 2268 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:10:40.614017 kubelet[2268]: I0213 15:10:40.613972 2268 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:10:40.620360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 15:10:40.640180 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:10:40.643079 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:10:40.653591 kubelet[2268]: I0213 15:10:40.653554 2268 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:10:40.653930 kubelet[2268]: I0213 15:10:40.653897 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:10:40.657553 kubelet[2268]: E0213 15:10:40.657510 2268 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:10:40.690740 kubelet[2268]: E0213 15:10:40.690582 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Feb 13 15:10:40.792108 kubelet[2268]: I0213 15:10:40.792073 2268 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:10:40.792422 kubelet[2268]: E0213 15:10:40.792408 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 15:10:40.804491 kubelet[2268]: I0213 15:10:40.804434 2268 topology_manager.go:215] "Topology Admit Handler" podUID="a02d69d6090aa826a3683016e399c74b" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:10:40.805690 kubelet[2268]: I0213 15:10:40.805418 2268 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:10:40.806421 kubelet[2268]: I0213 15:10:40.806345 2268 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:10:40.813984 systemd[1]: Created slice kubepods-burstable-poda02d69d6090aa826a3683016e399c74b.slice - libcontainer container kubepods-burstable-poda02d69d6090aa826a3683016e399c74b.slice. Feb 13 15:10:40.835183 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. Feb 13 15:10:40.838486 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. 
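The admit handlers above take in the static control-plane pods from /etc/kubernetes/manifests; the repeated "connection refused" errors against https://10.0.0.7:6443 simply mean the kube-apiserver pod has not come up yet. Once it has, a client-go sketch like the one below (the kubeconfig path is an assumption based on the kubeadm-style layout, and "localhost" is the node name the kubelet tries to register above) can confirm the node registered.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption (kubeadm's default admin credentials).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// "localhost" is the node name seen in the registration attempts above.
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(node.Name, node.Status.NodeInfo.KubeletVersion)
}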
Feb 13 15:10:40.890721 kubelet[2268]: I0213 15:10:40.890634 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a02d69d6090aa826a3683016e399c74b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a02d69d6090aa826a3683016e399c74b\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:40.890721 kubelet[2268]: I0213 15:10:40.890690 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a02d69d6090aa826a3683016e399c74b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a02d69d6090aa826a3683016e399c74b\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:40.890721 kubelet[2268]: I0213 15:10:40.890715 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a02d69d6090aa826a3683016e399c74b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a02d69d6090aa826a3683016e399c74b\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:40.890721 kubelet[2268]: I0213 15:10:40.890748 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:40.891098 kubelet[2268]: I0213 15:10:40.890767 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:40.891098 kubelet[2268]: I0213 15:10:40.890789 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:40.891098 kubelet[2268]: I0213 15:10:40.890808 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:40.891098 kubelet[2268]: I0213 15:10:40.890840 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:40.891098 kubelet[2268]: I0213 15:10:40.890908 2268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " 
pod="kube-system/kube-scheduler-localhost" Feb 13 15:10:41.091736 kubelet[2268]: E0213 15:10:41.091618 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Feb 13 15:10:41.132893 kubelet[2268]: E0213 15:10:41.132848 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:41.133573 containerd[1475]: time="2025-02-13T15:10:41.133538216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a02d69d6090aa826a3683016e399c74b,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:41.137761 kubelet[2268]: E0213 15:10:41.137708 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:41.138118 containerd[1475]: time="2025-02-13T15:10:41.138088594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:41.141539 kubelet[2268]: E0213 15:10:41.141322 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:41.141709 containerd[1475]: time="2025-02-13T15:10:41.141676906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:41.194024 kubelet[2268]: I0213 15:10:41.193988 2268 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:10:41.194299 kubelet[2268]: E0213 15:10:41.194285 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 15:10:41.279974 kubelet[2268]: W0213 15:10:41.279919 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.279974 kubelet[2268]: E0213 15:10:41.279976 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.472261 kubelet[2268]: W0213 15:10:41.472121 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.472261 kubelet[2268]: E0213 15:10:41.472167 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.625928 kubelet[2268]: W0213 
15:10:41.625868 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.625928 kubelet[2268]: E0213 15:10:41.625917 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.688668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280662033.mount: Deactivated successfully. Feb 13 15:10:41.692355 containerd[1475]: time="2025-02-13T15:10:41.692018740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:41.693188 containerd[1475]: time="2025-02-13T15:10:41.693152072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:10:41.694668 containerd[1475]: time="2025-02-13T15:10:41.694618392Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:41.696616 containerd[1475]: time="2025-02-13T15:10:41.696579022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:41.701833 containerd[1475]: time="2025-02-13T15:10:41.701788866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:10:41.704165 containerd[1475]: time="2025-02-13T15:10:41.704112592Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:41.705026 containerd[1475]: time="2025-02-13T15:10:41.704992719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:41.705967 containerd[1475]: time="2025-02-13T15:10:41.705928151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.308994ms" Feb 13 15:10:41.706913 containerd[1475]: time="2025-02-13T15:10:41.706872687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:10:41.718197 containerd[1475]: time="2025-02-13T15:10:41.718082339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.341942ms" Feb 13 15:10:41.722655 
containerd[1475]: time="2025-02-13T15:10:41.722486568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.329492ms" Feb 13 15:10:41.808409 kubelet[2268]: W0213 15:10:41.808285 2268 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.808409 kubelet[2268]: E0213 15:10:41.808348 2268 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 15:10:41.865469 containerd[1475]: time="2025-02-13T15:10:41.864454254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:41.865469 containerd[1475]: time="2025-02-13T15:10:41.865368123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:41.865469 containerd[1475]: time="2025-02-13T15:10:41.865382738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:41.865469 containerd[1475]: time="2025-02-13T15:10:41.865459646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:41.867697 containerd[1475]: time="2025-02-13T15:10:41.867606515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:41.867697 containerd[1475]: time="2025-02-13T15:10:41.867675078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:41.867697 containerd[1475]: time="2025-02-13T15:10:41.867690371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:41.868892 containerd[1475]: time="2025-02-13T15:10:41.868162280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:41.868892 containerd[1475]: time="2025-02-13T15:10:41.868811284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:41.868892 containerd[1475]: time="2025-02-13T15:10:41.868840993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:41.868892 containerd[1475]: time="2025-02-13T15:10:41.868832568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:41.869422 containerd[1475]: time="2025-02-13T15:10:41.869379867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:41.892713 kubelet[2268]: E0213 15:10:41.892675 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Feb 13 15:10:41.894844 systemd[1]: Started cri-containerd-106fd8cf1758fb473a63695fcbae244b7f8fe296cb867e08317084bba0b6412e.scope - libcontainer container 106fd8cf1758fb473a63695fcbae244b7f8fe296cb867e08317084bba0b6412e. Feb 13 15:10:41.896264 systemd[1]: Started cri-containerd-20f708ad4080ada029a6d37310f97eda176c70c758d00094d6436700e4a7e440.scope - libcontainer container 20f708ad4080ada029a6d37310f97eda176c70c758d00094d6436700e4a7e440. Feb 13 15:10:41.898673 systemd[1]: Started cri-containerd-e51ade07b121116ec793c88aadf74c43fc61d699c4072f8adb6a07b8a95af7f0.scope - libcontainer container e51ade07b121116ec793c88aadf74c43fc61d699c4072f8adb6a07b8a95af7f0. Feb 13 15:10:41.931490 containerd[1475]: time="2025-02-13T15:10:41.930967282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a02d69d6090aa826a3683016e399c74b,Namespace:kube-system,Attempt:0,} returns sandbox id \"106fd8cf1758fb473a63695fcbae244b7f8fe296cb867e08317084bba0b6412e\"" Feb 13 15:10:41.935354 kubelet[2268]: E0213 15:10:41.935324 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:41.938691 containerd[1475]: time="2025-02-13T15:10:41.938583590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51ade07b121116ec793c88aadf74c43fc61d699c4072f8adb6a07b8a95af7f0\"" Feb 13 15:10:41.939611 kubelet[2268]: E0213 15:10:41.939590 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:41.940193 containerd[1475]: time="2025-02-13T15:10:41.939931313Z" level=info msg="CreateContainer within sandbox \"106fd8cf1758fb473a63695fcbae244b7f8fe296cb867e08317084bba0b6412e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:10:41.941018 containerd[1475]: time="2025-02-13T15:10:41.940986899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"20f708ad4080ada029a6d37310f97eda176c70c758d00094d6436700e4a7e440\"" Feb 13 15:10:41.941669 kubelet[2268]: E0213 15:10:41.941627 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:41.943111 containerd[1475]: time="2025-02-13T15:10:41.943084733Z" level=info msg="CreateContainer within sandbox \"e51ade07b121116ec793c88aadf74c43fc61d699c4072f8adb6a07b8a95af7f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:10:41.944232 containerd[1475]: time="2025-02-13T15:10:41.944187158Z" level=info msg="CreateContainer within sandbox \"20f708ad4080ada029a6d37310f97eda176c70c758d00094d6436700e4a7e440\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:10:41.968072 
containerd[1475]: time="2025-02-13T15:10:41.967921640Z" level=info msg="CreateContainer within sandbox \"20f708ad4080ada029a6d37310f97eda176c70c758d00094d6436700e4a7e440\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8af0c8213813c7d92ab70ddab7df7401a6ceacf9a9b40ac1bbde3b8f0e7f2333\"" Feb 13 15:10:41.968690 containerd[1475]: time="2025-02-13T15:10:41.968666599Z" level=info msg="CreateContainer within sandbox \"106fd8cf1758fb473a63695fcbae244b7f8fe296cb867e08317084bba0b6412e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cc28834f62be6a8adea49c423cce5822d4f71c8a9988241e82e12ea7dbf4404\"" Feb 13 15:10:41.968977 containerd[1475]: time="2025-02-13T15:10:41.968956820Z" level=info msg="StartContainer for \"8af0c8213813c7d92ab70ddab7df7401a6ceacf9a9b40ac1bbde3b8f0e7f2333\"" Feb 13 15:10:41.969278 containerd[1475]: time="2025-02-13T15:10:41.969251434Z" level=info msg="StartContainer for \"0cc28834f62be6a8adea49c423cce5822d4f71c8a9988241e82e12ea7dbf4404\"" Feb 13 15:10:41.970076 containerd[1475]: time="2025-02-13T15:10:41.970039878Z" level=info msg="CreateContainer within sandbox \"e51ade07b121116ec793c88aadf74c43fc61d699c4072f8adb6a07b8a95af7f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e853499b302d2d618c8cb886f286bdc9215c630342bc8b818ae995358514d8c1\"" Feb 13 15:10:41.970536 containerd[1475]: time="2025-02-13T15:10:41.970476129Z" level=info msg="StartContainer for \"e853499b302d2d618c8cb886f286bdc9215c630342bc8b818ae995358514d8c1\"" Feb 13 15:10:41.995571 kubelet[2268]: I0213 15:10:41.995477 2268 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:10:41.995843 kubelet[2268]: E0213 15:10:41.995813 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 15:10:41.996826 systemd[1]: Started cri-containerd-0cc28834f62be6a8adea49c423cce5822d4f71c8a9988241e82e12ea7dbf4404.scope - libcontainer container 0cc28834f62be6a8adea49c423cce5822d4f71c8a9988241e82e12ea7dbf4404. Feb 13 15:10:41.997786 systemd[1]: Started cri-containerd-8af0c8213813c7d92ab70ddab7df7401a6ceacf9a9b40ac1bbde3b8f0e7f2333.scope - libcontainer container 8af0c8213813c7d92ab70ddab7df7401a6ceacf9a9b40ac1bbde3b8f0e7f2333. Feb 13 15:10:41.998608 systemd[1]: Started cri-containerd-e853499b302d2d618c8cb886f286bdc9215c630342bc8b818ae995358514d8c1.scope - libcontainer container e853499b302d2d618c8cb886f286bdc9215c630342bc8b818ae995358514d8c1. 
Feb 13 15:10:42.050912 containerd[1475]: time="2025-02-13T15:10:42.047273632Z" level=info msg="StartContainer for \"0cc28834f62be6a8adea49c423cce5822d4f71c8a9988241e82e12ea7dbf4404\" returns successfully" Feb 13 15:10:42.061649 containerd[1475]: time="2025-02-13T15:10:42.058927268Z" level=info msg="StartContainer for \"8af0c8213813c7d92ab70ddab7df7401a6ceacf9a9b40ac1bbde3b8f0e7f2333\" returns successfully" Feb 13 15:10:42.061649 containerd[1475]: time="2025-02-13T15:10:42.059032869Z" level=info msg="StartContainer for \"e853499b302d2d618c8cb886f286bdc9215c630342bc8b818ae995358514d8c1\" returns successfully" Feb 13 15:10:42.515210 kubelet[2268]: E0213 15:10:42.515167 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:42.518497 kubelet[2268]: E0213 15:10:42.518461 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:42.520385 kubelet[2268]: E0213 15:10:42.520362 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:43.522333 kubelet[2268]: E0213 15:10:43.522304 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:43.574831 kubelet[2268]: E0213 15:10:43.574794 2268 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:10:43.597666 kubelet[2268]: I0213 15:10:43.597631 2268 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:10:43.604866 kubelet[2268]: I0213 15:10:43.604834 2268 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:10:44.472443 kubelet[2268]: I0213 15:10:44.472391 2268 apiserver.go:52] "Watching apiserver" Feb 13 15:10:44.488655 kubelet[2268]: I0213 15:10:44.488619 2268 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:10:46.147914 systemd[1]: Reloading requested from client PID 2543 ('systemctl') (unit session-7.scope)... Feb 13 15:10:46.147930 systemd[1]: Reloading... Feb 13 15:10:46.207746 zram_generator::config[2583]: No configuration found. Feb 13 15:10:46.393726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:46.460528 systemd[1]: Reloading finished in 312 ms. Feb 13 15:10:46.493948 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:46.502893 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:10:46.503096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:46.512984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:46.602655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:10:46.612957 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:10:46.657874 kubelet[2624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:10:46.657874 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:10:46.657874 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:10:46.657874 kubelet[2624]: I0213 15:10:46.657849 2624 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:10:46.662440 kubelet[2624]: I0213 15:10:46.662393 2624 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:10:46.662440 kubelet[2624]: I0213 15:10:46.662427 2624 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:10:46.662694 kubelet[2624]: I0213 15:10:46.662669 2624 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:10:46.664232 kubelet[2624]: I0213 15:10:46.664193 2624 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:10:46.666477 kubelet[2624]: I0213 15:10:46.666341 2624 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:10:46.676335 kubelet[2624]: I0213 15:10:46.676163 2624 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:10:46.676443 kubelet[2624]: I0213 15:10:46.676387 2624 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:10:46.676566 kubelet[2624]: I0213 15:10:46.676545 2624 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:10:46.676566 kubelet[2624]: I0213 15:10:46.676567 2624 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:10:46.676699 kubelet[2624]: I0213 15:10:46.676577 2624 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:10:46.676699 kubelet[2624]: I0213 15:10:46.676608 2624 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:46.676744 kubelet[2624]: I0213 15:10:46.676713 2624 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:10:46.676744 kubelet[2624]: I0213 15:10:46.676727 2624 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:10:46.676796 kubelet[2624]: I0213 15:10:46.676757 2624 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:10:46.676796 kubelet[2624]: I0213 15:10:46.676773 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.679018 2624 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.679188 2624 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.679583 2624 server.go:1256] "Started kubelet" Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.679680 2624 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.679872 2624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.680067 2624 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:10:46.681477 kubelet[2624]: 
I0213 15:10:46.680549 2624 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:10:46.681477 kubelet[2624]: I0213 15:10:46.681341 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:10:46.682963 kubelet[2624]: I0213 15:10:46.682942 2624 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:10:46.683072 kubelet[2624]: I0213 15:10:46.683051 2624 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:10:46.683243 kubelet[2624]: I0213 15:10:46.683225 2624 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:10:46.697861 kubelet[2624]: I0213 15:10:46.697832 2624 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:10:46.697952 kubelet[2624]: I0213 15:10:46.697913 2624 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:10:46.700606 kubelet[2624]: E0213 15:10:46.700544 2624 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:10:46.700895 kubelet[2624]: I0213 15:10:46.700871 2624 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:10:46.705777 kubelet[2624]: I0213 15:10:46.705593 2624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:10:46.707875 kubelet[2624]: I0213 15:10:46.707851 2624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:10:46.708992 kubelet[2624]: I0213 15:10:46.708890 2624 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:10:46.708992 kubelet[2624]: I0213 15:10:46.708927 2624 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:10:46.708992 kubelet[2624]: E0213 15:10:46.708987 2624 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:10:46.740126 kubelet[2624]: I0213 15:10:46.740037 2624 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:10:46.740126 kubelet[2624]: I0213 15:10:46.740058 2624 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:10:46.740126 kubelet[2624]: I0213 15:10:46.740074 2624 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:46.740267 kubelet[2624]: I0213 15:10:46.740222 2624 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:10:46.740267 kubelet[2624]: I0213 15:10:46.740241 2624 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:10:46.740267 kubelet[2624]: I0213 15:10:46.740247 2624 policy_none.go:49] "None policy: Start" Feb 13 15:10:46.740901 kubelet[2624]: I0213 15:10:46.740851 2624 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:10:46.740901 kubelet[2624]: I0213 15:10:46.740879 2624 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:10:46.741041 kubelet[2624]: I0213 15:10:46.741026 2624 state_mem.go:75] "Updated machine memory state" Feb 13 15:10:46.748811 kubelet[2624]: I0213 15:10:46.748631 2624 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:10:46.749548 kubelet[2624]: I0213 15:10:46.749527 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:10:46.786919 kubelet[2624]: I0213 
15:10:46.786892 2624 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:10:46.794270 kubelet[2624]: I0213 15:10:46.794205 2624 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:10:46.794751 kubelet[2624]: I0213 15:10:46.794283 2624 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:10:46.809510 kubelet[2624]: I0213 15:10:46.809487 2624 topology_manager.go:215] "Topology Admit Handler" podUID="a02d69d6090aa826a3683016e399c74b" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:10:46.809623 kubelet[2624]: I0213 15:10:46.809587 2624 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:10:46.809701 kubelet[2624]: I0213 15:10:46.809684 2624 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:10:46.883583 kubelet[2624]: I0213 15:10:46.883544 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a02d69d6090aa826a3683016e399c74b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a02d69d6090aa826a3683016e399c74b\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:46.883733 kubelet[2624]: I0213 15:10:46.883597 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:46.883733 kubelet[2624]: I0213 15:10:46.883617 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:46.883733 kubelet[2624]: I0213 15:10:46.883669 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:46.883733 kubelet[2624]: I0213 15:10:46.883696 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:10:46.883733 kubelet[2624]: I0213 15:10:46.883718 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a02d69d6090aa826a3683016e399c74b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a02d69d6090aa826a3683016e399c74b\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:46.883847 kubelet[2624]: I0213 15:10:46.883789 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a02d69d6090aa826a3683016e399c74b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a02d69d6090aa826a3683016e399c74b\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:46.883847 kubelet[2624]: I0213 15:10:46.883825 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:46.883890 kubelet[2624]: I0213 15:10:46.883858 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:47.118090 kubelet[2624]: E0213 15:10:47.118059 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:47.118534 kubelet[2624]: E0213 15:10:47.118444 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:47.118634 kubelet[2624]: E0213 15:10:47.118494 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:47.678862 kubelet[2624]: I0213 15:10:47.678814 2624 apiserver.go:52] "Watching apiserver" Feb 13 15:10:47.683599 kubelet[2624]: I0213 15:10:47.683550 2624 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:10:47.722555 kubelet[2624]: E0213 15:10:47.722433 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:47.722555 kubelet[2624]: E0213 15:10:47.722499 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:47.732492 kubelet[2624]: E0213 15:10:47.731954 2624 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:47.732492 kubelet[2624]: E0213 15:10:47.732423 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:47.766464 kubelet[2624]: I0213 15:10:47.766414 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.766357625 podStartE2EDuration="1.766357625s" podCreationTimestamp="2025-02-13 15:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:47.766245951 +0000 UTC m=+1.149645622" watchObservedRunningTime="2025-02-13 15:10:47.766357625 +0000 UTC 
m=+1.149757296" Feb 13 15:10:47.774197 kubelet[2624]: I0213 15:10:47.774144 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.774105453 podStartE2EDuration="1.774105453s" podCreationTimestamp="2025-02-13 15:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:47.774038385 +0000 UTC m=+1.157438056" watchObservedRunningTime="2025-02-13 15:10:47.774105453 +0000 UTC m=+1.157505124" Feb 13 15:10:47.782214 kubelet[2624]: I0213 15:10:47.782174 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.782064399 podStartE2EDuration="1.782064399s" podCreationTimestamp="2025-02-13 15:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:47.782042976 +0000 UTC m=+1.165442647" watchObservedRunningTime="2025-02-13 15:10:47.782064399 +0000 UTC m=+1.165464070" Feb 13 15:10:48.725041 kubelet[2624]: E0213 15:10:48.725000 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:48.728486 kubelet[2624]: E0213 15:10:48.725096 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:49.075559 kubelet[2624]: E0213 15:10:49.071677 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:50.632770 kubelet[2624]: E0213 15:10:50.632709 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:50.921625 sudo[1649]: pam_unix(sudo:session): session closed for user root Feb 13 15:10:50.922838 sshd[1648]: Connection closed by 10.0.0.1 port 56836 Feb 13 15:10:50.923400 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:50.926428 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:56836.service: Deactivated successfully. Feb 13 15:10:50.928610 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:10:50.928806 systemd[1]: session-7.scope: Consumed 6.422s CPU time, 192.9M memory peak, 0B memory swap peak. Feb 13 15:10:50.932564 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:10:50.935476 systemd-logind[1454]: Removed session 7. Feb 13 15:10:51.892868 kubelet[2624]: E0213 15:10:51.892833 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:57.992993 kubelet[2624]: I0213 15:10:57.992949 2624 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:10:58.005277 containerd[1475]: time="2025-02-13T15:10:58.005218237Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:10:58.005609 kubelet[2624]: I0213 15:10:58.005577 2624 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:10:59.006449 kubelet[2624]: I0213 15:10:59.006030 2624 topology_manager.go:215] "Topology Admit Handler" podUID="7b62f916-1eac-4918-bc02-449561e8a8bf" podNamespace="kube-system" podName="kube-proxy-tc2jx" Feb 13 15:10:59.017956 systemd[1]: Created slice kubepods-besteffort-pod7b62f916_1eac_4918_bc02_449561e8a8bf.slice - libcontainer container kubepods-besteffort-pod7b62f916_1eac_4918_bc02_449561e8a8bf.slice. Feb 13 15:10:59.071654 kubelet[2624]: I0213 15:10:59.071383 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b62f916-1eac-4918-bc02-449561e8a8bf-kube-proxy\") pod \"kube-proxy-tc2jx\" (UID: \"7b62f916-1eac-4918-bc02-449561e8a8bf\") " pod="kube-system/kube-proxy-tc2jx" Feb 13 15:10:59.071654 kubelet[2624]: I0213 15:10:59.071430 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7nks\" (UniqueName: \"kubernetes.io/projected/7b62f916-1eac-4918-bc02-449561e8a8bf-kube-api-access-t7nks\") pod \"kube-proxy-tc2jx\" (UID: \"7b62f916-1eac-4918-bc02-449561e8a8bf\") " pod="kube-system/kube-proxy-tc2jx" Feb 13 15:10:59.071654 kubelet[2624]: I0213 15:10:59.071452 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b62f916-1eac-4918-bc02-449561e8a8bf-xtables-lock\") pod \"kube-proxy-tc2jx\" (UID: \"7b62f916-1eac-4918-bc02-449561e8a8bf\") " pod="kube-system/kube-proxy-tc2jx" Feb 13 15:10:59.071654 kubelet[2624]: I0213 15:10:59.071594 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b62f916-1eac-4918-bc02-449561e8a8bf-lib-modules\") pod \"kube-proxy-tc2jx\" (UID: \"7b62f916-1eac-4918-bc02-449561e8a8bf\") " pod="kube-system/kube-proxy-tc2jx" Feb 13 15:10:59.080468 kubelet[2624]: I0213 15:10:59.080421 2624 topology_manager.go:215] "Topology Admit Handler" podUID="01e0eb6f-6127-443a-891d-9b2aeeb7c196" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-p8nmk" Feb 13 15:10:59.090287 systemd[1]: Created slice kubepods-besteffort-pod01e0eb6f_6127_443a_891d_9b2aeeb7c196.slice - libcontainer container kubepods-besteffort-pod01e0eb6f_6127_443a_891d_9b2aeeb7c196.slice. 
Feb 13 15:10:59.095627 kubelet[2624]: E0213 15:10:59.095598 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:59.172484 kubelet[2624]: I0213 15:10:59.172444 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01e0eb6f-6127-443a-891d-9b2aeeb7c196-var-lib-calico\") pod \"tigera-operator-c7ccbd65-p8nmk\" (UID: \"01e0eb6f-6127-443a-891d-9b2aeeb7c196\") " pod="tigera-operator/tigera-operator-c7ccbd65-p8nmk" Feb 13 15:10:59.178748 kubelet[2624]: I0213 15:10:59.178638 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkg6v\" (UniqueName: \"kubernetes.io/projected/01e0eb6f-6127-443a-891d-9b2aeeb7c196-kube-api-access-xkg6v\") pod \"tigera-operator-c7ccbd65-p8nmk\" (UID: \"01e0eb6f-6127-443a-891d-9b2aeeb7c196\") " pod="tigera-operator/tigera-operator-c7ccbd65-p8nmk" Feb 13 15:10:59.327673 kubelet[2624]: E0213 15:10:59.327540 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:59.328292 containerd[1475]: time="2025-02-13T15:10:59.328243393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tc2jx,Uid:7b62f916-1eac-4918-bc02-449561e8a8bf,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:59.365974 containerd[1475]: time="2025-02-13T15:10:59.365786848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:59.365974 containerd[1475]: time="2025-02-13T15:10:59.365900503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:59.365974 containerd[1475]: time="2025-02-13T15:10:59.365912584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:59.366158 containerd[1475]: time="2025-02-13T15:10:59.365996915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:59.384833 systemd[1]: Started cri-containerd-ff671d9f431d1b158164c5e92b3dc3bc042ee9f3bd819c822f49805e93217e96.scope - libcontainer container ff671d9f431d1b158164c5e92b3dc3bc042ee9f3bd819c822f49805e93217e96. 
Feb 13 15:10:59.394355 containerd[1475]: time="2025-02-13T15:10:59.394007567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-p8nmk,Uid:01e0eb6f-6127-443a-891d-9b2aeeb7c196,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:10:59.405536 containerd[1475]: time="2025-02-13T15:10:59.405462381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tc2jx,Uid:7b62f916-1eac-4918-bc02-449561e8a8bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff671d9f431d1b158164c5e92b3dc3bc042ee9f3bd819c822f49805e93217e96\"" Feb 13 15:10:59.412876 kubelet[2624]: E0213 15:10:59.412766 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:10:59.418739 containerd[1475]: time="2025-02-13T15:10:59.418596813Z" level=info msg="CreateContainer within sandbox \"ff671d9f431d1b158164c5e92b3dc3bc042ee9f3bd819c822f49805e93217e96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:10:59.420906 containerd[1475]: time="2025-02-13T15:10:59.420183620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:59.420906 containerd[1475]: time="2025-02-13T15:10:59.420732971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:59.420906 containerd[1475]: time="2025-02-13T15:10:59.420747533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:59.421090 containerd[1475]: time="2025-02-13T15:10:59.421045892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:59.438825 systemd[1]: Started cri-containerd-a3dc62735cd98e7cd35e4865c0f678753198599f61511e8fb210d6bd72241623.scope - libcontainer container a3dc62735cd98e7cd35e4865c0f678753198599f61511e8fb210d6bd72241623. Feb 13 15:10:59.474104 containerd[1475]: time="2025-02-13T15:10:59.473924426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-p8nmk,Uid:01e0eb6f-6127-443a-891d-9b2aeeb7c196,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a3dc62735cd98e7cd35e4865c0f678753198599f61511e8fb210d6bd72241623\"" Feb 13 15:10:59.478537 containerd[1475]: time="2025-02-13T15:10:59.478475259Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:10:59.532512 containerd[1475]: time="2025-02-13T15:10:59.532454857Z" level=info msg="CreateContainer within sandbox \"ff671d9f431d1b158164c5e92b3dc3bc042ee9f3bd819c822f49805e93217e96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"35b4903d406ac006408af1250cdf52a30e9eba84b6049aec701aac17d7a64a66\"" Feb 13 15:10:59.533211 containerd[1475]: time="2025-02-13T15:10:59.533122744Z" level=info msg="StartContainer for \"35b4903d406ac006408af1250cdf52a30e9eba84b6049aec701aac17d7a64a66\"" Feb 13 15:10:59.568910 systemd[1]: Started cri-containerd-35b4903d406ac006408af1250cdf52a30e9eba84b6049aec701aac17d7a64a66.scope - libcontainer container 35b4903d406ac006408af1250cdf52a30e9eba84b6049aec701aac17d7a64a66. 
Feb 13 15:10:59.606058 containerd[1475]: time="2025-02-13T15:10:59.604434841Z" level=info msg="StartContainer for \"35b4903d406ac006408af1250cdf52a30e9eba84b6049aec701aac17d7a64a66\" returns successfully" Feb 13 15:10:59.746064 kubelet[2624]: E0213 15:10:59.746026 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:00.642083 kubelet[2624]: E0213 15:11:00.642027 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:00.653434 kubelet[2624]: I0213 15:11:00.653394 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tc2jx" podStartSLOduration=2.653357739 podStartE2EDuration="2.653357739s" podCreationTimestamp="2025-02-13 15:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:59.766242736 +0000 UTC m=+13.149642487" watchObservedRunningTime="2025-02-13 15:11:00.653357739 +0000 UTC m=+14.036757410" Feb 13 15:11:00.748464 kubelet[2624]: E0213 15:11:00.748431 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:01.906260 kubelet[2624]: E0213 15:11:01.905893 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:02.095624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199582582.mount: Deactivated successfully. Feb 13 15:11:02.145677 update_engine[1456]: I20250213 15:11:02.145035 1456 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:11:02.181743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2965) Feb 13 15:11:02.203267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2967) Feb 13 15:11:02.545112 containerd[1475]: time="2025-02-13T15:11:02.544983834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:02.545861 containerd[1475]: time="2025-02-13T15:11:02.545817007Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 15:11:02.546778 containerd[1475]: time="2025-02-13T15:11:02.546743271Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:02.549987 containerd[1475]: time="2025-02-13T15:11:02.549950629Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:02.550754 containerd[1475]: time="2025-02-13T15:11:02.550719555Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.072176007s" Feb 13 15:11:02.550796 containerd[1475]: time="2025-02-13T15:11:02.550753639Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 15:11:02.573094 containerd[1475]: time="2025-02-13T15:11:02.572994326Z" level=info msg="CreateContainer within sandbox \"a3dc62735cd98e7cd35e4865c0f678753198599f61511e8fb210d6bd72241623\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:11:02.584315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748901760.mount: Deactivated successfully. Feb 13 15:11:02.589055 containerd[1475]: time="2025-02-13T15:11:02.588996716Z" level=info msg="CreateContainer within sandbox \"a3dc62735cd98e7cd35e4865c0f678753198599f61511e8fb210d6bd72241623\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4724b2a7c93190e60fe715940e636543a6901d5c7490c34788bfcd838e78f22b\"" Feb 13 15:11:02.590355 containerd[1475]: time="2025-02-13T15:11:02.589479010Z" level=info msg="StartContainer for \"4724b2a7c93190e60fe715940e636543a6901d5c7490c34788bfcd838e78f22b\"" Feb 13 15:11:02.619823 systemd[1]: Started cri-containerd-4724b2a7c93190e60fe715940e636543a6901d5c7490c34788bfcd838e78f22b.scope - libcontainer container 4724b2a7c93190e60fe715940e636543a6901d5c7490c34788bfcd838e78f22b. 
Feb 13 15:11:02.677147 containerd[1475]: time="2025-02-13T15:11:02.676957314Z" level=info msg="StartContainer for \"4724b2a7c93190e60fe715940e636543a6901d5c7490c34788bfcd838e78f22b\" returns successfully" Feb 13 15:11:02.782710 kubelet[2624]: I0213 15:11:02.781492 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-p8nmk" podStartSLOduration=0.704778161 podStartE2EDuration="3.781451561s" podCreationTimestamp="2025-02-13 15:10:59 +0000 UTC" firstStartedPulling="2025-02-13 15:10:59.47587276 +0000 UTC m=+12.859272431" lastFinishedPulling="2025-02-13 15:11:02.5525462 +0000 UTC m=+15.935945831" observedRunningTime="2025-02-13 15:11:02.780772325 +0000 UTC m=+16.164171996" watchObservedRunningTime="2025-02-13 15:11:02.781451561 +0000 UTC m=+16.164851232" Feb 13 15:11:07.100065 kubelet[2624]: I0213 15:11:07.100016 2624 topology_manager.go:215] "Topology Admit Handler" podUID="8c7db46e-56e3-4e1e-8307-8f84122877ba" podNamespace="calico-system" podName="calico-typha-5d67c7f966-db2ng" Feb 13 15:11:07.111954 systemd[1]: Created slice kubepods-besteffort-pod8c7db46e_56e3_4e1e_8307_8f84122877ba.slice - libcontainer container kubepods-besteffort-pod8c7db46e_56e3_4e1e_8307_8f84122877ba.slice. Feb 13 15:11:07.133106 kubelet[2624]: I0213 15:11:07.133060 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c7db46e-56e3-4e1e-8307-8f84122877ba-tigera-ca-bundle\") pod \"calico-typha-5d67c7f966-db2ng\" (UID: \"8c7db46e-56e3-4e1e-8307-8f84122877ba\") " pod="calico-system/calico-typha-5d67c7f966-db2ng" Feb 13 15:11:07.133106 kubelet[2624]: I0213 15:11:07.133110 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzkz6\" (UniqueName: \"kubernetes.io/projected/8c7db46e-56e3-4e1e-8307-8f84122877ba-kube-api-access-pzkz6\") pod \"calico-typha-5d67c7f966-db2ng\" (UID: \"8c7db46e-56e3-4e1e-8307-8f84122877ba\") " pod="calico-system/calico-typha-5d67c7f966-db2ng" Feb 13 15:11:07.133260 kubelet[2624]: I0213 15:11:07.133142 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c7db46e-56e3-4e1e-8307-8f84122877ba-typha-certs\") pod \"calico-typha-5d67c7f966-db2ng\" (UID: \"8c7db46e-56e3-4e1e-8307-8f84122877ba\") " pod="calico-system/calico-typha-5d67c7f966-db2ng" Feb 13 15:11:07.293026 kubelet[2624]: I0213 15:11:07.292985 2624 topology_manager.go:215] "Topology Admit Handler" podUID="6e64515d-6ea9-4651-b9ad-fc36ab2cd484" podNamespace="calico-system" podName="calico-node-bplkh" Feb 13 15:11:07.301212 systemd[1]: Created slice kubepods-besteffort-pod6e64515d_6ea9_4651_b9ad_fc36ab2cd484.slice - libcontainer container kubepods-besteffort-pod6e64515d_6ea9_4651_b9ad_fc36ab2cd484.slice. 
Feb 13 15:11:07.334031 kubelet[2624]: I0213 15:11:07.333985 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-var-run-calico\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334031 kubelet[2624]: I0213 15:11:07.334036 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-lib-modules\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334189 kubelet[2624]: I0213 15:11:07.334057 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-var-lib-calico\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334189 kubelet[2624]: I0213 15:11:07.334079 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-xtables-lock\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334239 kubelet[2624]: I0213 15:11:07.334179 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-policysync\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334239 kubelet[2624]: I0213 15:11:07.334228 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-flexvol-driver-host\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334293 kubelet[2624]: I0213 15:11:07.334275 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-tigera-ca-bundle\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334315 kubelet[2624]: I0213 15:11:07.334300 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-node-certs\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334339 kubelet[2624]: I0213 15:11:07.334320 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-cni-bin-dir\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334365 kubelet[2624]: I0213 15:11:07.334341 2624 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrmd\" (UniqueName: \"kubernetes.io/projected/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-kube-api-access-dcrmd\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334365 kubelet[2624]: I0213 15:11:07.334361 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-cni-log-dir\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.334411 kubelet[2624]: I0213 15:11:07.334381 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6e64515d-6ea9-4651-b9ad-fc36ab2cd484-cni-net-dir\") pod \"calico-node-bplkh\" (UID: \"6e64515d-6ea9-4651-b9ad-fc36ab2cd484\") " pod="calico-system/calico-node-bplkh" Feb 13 15:11:07.417041 kubelet[2624]: E0213 15:11:07.417001 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:07.418258 containerd[1475]: time="2025-02-13T15:11:07.417764516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d67c7f966-db2ng,Uid:8c7db46e-56e3-4e1e-8307-8f84122877ba,Namespace:calico-system,Attempt:0,}" Feb 13 15:11:07.440540 kubelet[2624]: E0213 15:11:07.440518 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.440540 kubelet[2624]: W0213 15:11:07.440537 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.440706 kubelet[2624]: E0213 15:11:07.440563 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.440794 kubelet[2624]: E0213 15:11:07.440782 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.440840 kubelet[2624]: W0213 15:11:07.440794 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.440840 kubelet[2624]: E0213 15:11:07.440823 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.448764 kubelet[2624]: E0213 15:11:07.448744 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.448963 kubelet[2624]: W0213 15:11:07.448849 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.448963 kubelet[2624]: E0213 15:11:07.448874 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.454974 containerd[1475]: time="2025-02-13T15:11:07.454867058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:07.454974 containerd[1475]: time="2025-02-13T15:11:07.454936464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:07.454974 containerd[1475]: time="2025-02-13T15:11:07.454956346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:07.455146 containerd[1475]: time="2025-02-13T15:11:07.455043794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:07.482847 systemd[1]: Started cri-containerd-4ebdd1e80a4186fab30678b8fce2aa6005d42e422ba8313bab15aaac7f3a40d0.scope - libcontainer container 4ebdd1e80a4186fab30678b8fce2aa6005d42e422ba8313bab15aaac7f3a40d0. Feb 13 15:11:07.500733 kubelet[2624]: I0213 15:11:07.500420 2624 topology_manager.go:215] "Topology Admit Handler" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" podNamespace="calico-system" podName="csi-node-driver-kwm8r" Feb 13 15:11:07.502339 kubelet[2624]: E0213 15:11:07.502129 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:07.527925 containerd[1475]: time="2025-02-13T15:11:07.527883077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d67c7f966-db2ng,Uid:8c7db46e-56e3-4e1e-8307-8f84122877ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ebdd1e80a4186fab30678b8fce2aa6005d42e422ba8313bab15aaac7f3a40d0\"" Feb 13 15:11:07.528468 kubelet[2624]: E0213 15:11:07.528446 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.528551 kubelet[2624]: W0213 15:11:07.528467 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.528551 kubelet[2624]: E0213 15:11:07.528488 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.529057 kubelet[2624]: E0213 15:11:07.528555 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:07.529264 containerd[1475]: time="2025-02-13T15:11:07.529234556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:11:07.529751 kubelet[2624]: E0213 15:11:07.529734 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.529801 kubelet[2624]: W0213 15:11:07.529751 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.529801 kubelet[2624]: E0213 15:11:07.529766 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.529993 kubelet[2624]: E0213 15:11:07.529978 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.529993 kubelet[2624]: W0213 15:11:07.529990 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.530073 kubelet[2624]: E0213 15:11:07.530002 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.530616 kubelet[2624]: E0213 15:11:07.530600 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.530616 kubelet[2624]: W0213 15:11:07.530613 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.530720 kubelet[2624]: E0213 15:11:07.530628 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.530926 kubelet[2624]: E0213 15:11:07.530871 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.530926 kubelet[2624]: W0213 15:11:07.530883 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.530926 kubelet[2624]: E0213 15:11:07.530896 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.531062 kubelet[2624]: E0213 15:11:07.531049 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.531062 kubelet[2624]: W0213 15:11:07.531060 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.531122 kubelet[2624]: E0213 15:11:07.531071 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.531225 kubelet[2624]: E0213 15:11:07.531209 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.531225 kubelet[2624]: W0213 15:11:07.531219 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.531225 kubelet[2624]: E0213 15:11:07.531228 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.531364 kubelet[2624]: E0213 15:11:07.531358 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.531400 kubelet[2624]: W0213 15:11:07.531366 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.531400 kubelet[2624]: E0213 15:11:07.531375 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.531927 kubelet[2624]: E0213 15:11:07.531737 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.531927 kubelet[2624]: W0213 15:11:07.531914 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.532109 kubelet[2624]: E0213 15:11:07.531934 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.532267 kubelet[2624]: E0213 15:11:07.532250 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.532267 kubelet[2624]: W0213 15:11:07.532265 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.532379 kubelet[2624]: E0213 15:11:07.532286 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.532477 kubelet[2624]: E0213 15:11:07.532463 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.532477 kubelet[2624]: W0213 15:11:07.532474 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.532536 kubelet[2624]: E0213 15:11:07.532485 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.532953 kubelet[2624]: E0213 15:11:07.532935 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.532953 kubelet[2624]: W0213 15:11:07.532952 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.533037 kubelet[2624]: E0213 15:11:07.532967 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.533150 kubelet[2624]: E0213 15:11:07.533131 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.533150 kubelet[2624]: W0213 15:11:07.533145 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.533207 kubelet[2624]: E0213 15:11:07.533155 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.533915 kubelet[2624]: E0213 15:11:07.533896 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.533915 kubelet[2624]: W0213 15:11:07.533913 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.533997 kubelet[2624]: E0213 15:11:07.533927 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.534147 kubelet[2624]: E0213 15:11:07.534130 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.534189 kubelet[2624]: W0213 15:11:07.534147 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.534189 kubelet[2624]: E0213 15:11:07.534160 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.534827 kubelet[2624]: E0213 15:11:07.534789 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.534827 kubelet[2624]: W0213 15:11:07.534823 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.534907 kubelet[2624]: E0213 15:11:07.534838 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.535087 kubelet[2624]: E0213 15:11:07.535044 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.535087 kubelet[2624]: W0213 15:11:07.535056 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.535087 kubelet[2624]: E0213 15:11:07.535068 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.535559 kubelet[2624]: E0213 15:11:07.535539 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.535559 kubelet[2624]: W0213 15:11:07.535554 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.535630 kubelet[2624]: E0213 15:11:07.535567 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.535769 kubelet[2624]: E0213 15:11:07.535756 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.535769 kubelet[2624]: W0213 15:11:07.535767 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.535823 kubelet[2624]: E0213 15:11:07.535778 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.535921 kubelet[2624]: E0213 15:11:07.535910 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.535921 kubelet[2624]: W0213 15:11:07.535920 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.535973 kubelet[2624]: E0213 15:11:07.535930 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.536173 kubelet[2624]: E0213 15:11:07.536158 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.536173 kubelet[2624]: W0213 15:11:07.536169 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.536240 kubelet[2624]: E0213 15:11:07.536179 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.536240 kubelet[2624]: I0213 15:11:07.536205 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5494d8d-0818-4dbe-926f-03408aa43bf9-varrun\") pod \"csi-node-driver-kwm8r\" (UID: \"a5494d8d-0818-4dbe-926f-03408aa43bf9\") " pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:07.536355 kubelet[2624]: E0213 15:11:07.536336 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.536355 kubelet[2624]: W0213 15:11:07.536347 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.536412 kubelet[2624]: E0213 15:11:07.536358 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.536412 kubelet[2624]: I0213 15:11:07.536376 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvwl\" (UniqueName: \"kubernetes.io/projected/a5494d8d-0818-4dbe-926f-03408aa43bf9-kube-api-access-fzvwl\") pod \"csi-node-driver-kwm8r\" (UID: \"a5494d8d-0818-4dbe-926f-03408aa43bf9\") " pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:07.536544 kubelet[2624]: E0213 15:11:07.536531 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.536544 kubelet[2624]: W0213 15:11:07.536542 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.536589 kubelet[2624]: E0213 15:11:07.536556 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.536589 kubelet[2624]: I0213 15:11:07.536574 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5494d8d-0818-4dbe-926f-03408aa43bf9-kubelet-dir\") pod \"csi-node-driver-kwm8r\" (UID: \"a5494d8d-0818-4dbe-926f-03408aa43bf9\") " pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:07.536745 kubelet[2624]: E0213 15:11:07.536731 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.536800 kubelet[2624]: W0213 15:11:07.536744 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.536800 kubelet[2624]: E0213 15:11:07.536767 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.536800 kubelet[2624]: I0213 15:11:07.536784 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5494d8d-0818-4dbe-926f-03408aa43bf9-socket-dir\") pod \"csi-node-driver-kwm8r\" (UID: \"a5494d8d-0818-4dbe-926f-03408aa43bf9\") " pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:07.536946 kubelet[2624]: E0213 15:11:07.536933 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.536946 kubelet[2624]: W0213 15:11:07.536945 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.537000 kubelet[2624]: E0213 15:11:07.536960 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.537000 kubelet[2624]: I0213 15:11:07.536977 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5494d8d-0818-4dbe-926f-03408aa43bf9-registration-dir\") pod \"csi-node-driver-kwm8r\" (UID: \"a5494d8d-0818-4dbe-926f-03408aa43bf9\") " pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:07.537150 kubelet[2624]: E0213 15:11:07.537134 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.537150 kubelet[2624]: W0213 15:11:07.537148 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.537200 kubelet[2624]: E0213 15:11:07.537162 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.537297 kubelet[2624]: E0213 15:11:07.537285 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.537297 kubelet[2624]: W0213 15:11:07.537295 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.537357 kubelet[2624]: E0213 15:11:07.537306 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.537466 kubelet[2624]: E0213 15:11:07.537455 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.537466 kubelet[2624]: W0213 15:11:07.537465 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.537520 kubelet[2624]: E0213 15:11:07.537477 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.537636 kubelet[2624]: E0213 15:11:07.537627 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.537694 kubelet[2624]: W0213 15:11:07.537636 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.537785 kubelet[2624]: E0213 15:11:07.537753 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.537816 kubelet[2624]: E0213 15:11:07.537809 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.537843 kubelet[2624]: W0213 15:11:07.537817 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.537911 kubelet[2624]: E0213 15:11:07.537894 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.538001 kubelet[2624]: E0213 15:11:07.537946 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.538001 kubelet[2624]: W0213 15:11:07.537952 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.538001 kubelet[2624]: E0213 15:11:07.537981 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.538093 kubelet[2624]: E0213 15:11:07.538079 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.538093 kubelet[2624]: W0213 15:11:07.538089 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.538161 kubelet[2624]: E0213 15:11:07.538150 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.538211 kubelet[2624]: E0213 15:11:07.538202 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.538211 kubelet[2624]: W0213 15:11:07.538209 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.538264 kubelet[2624]: E0213 15:11:07.538218 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.538374 kubelet[2624]: E0213 15:11:07.538364 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.538374 kubelet[2624]: W0213 15:11:07.538373 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.538426 kubelet[2624]: E0213 15:11:07.538385 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.538527 kubelet[2624]: E0213 15:11:07.538518 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.538527 kubelet[2624]: W0213 15:11:07.538526 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.538582 kubelet[2624]: E0213 15:11:07.538536 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.603207 kubelet[2624]: E0213 15:11:07.603165 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:07.603836 containerd[1475]: time="2025-02-13T15:11:07.603785111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bplkh,Uid:6e64515d-6ea9-4651-b9ad-fc36ab2cd484,Namespace:calico-system,Attempt:0,}" Feb 13 15:11:07.623088 containerd[1475]: time="2025-02-13T15:11:07.622983918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:07.623088 containerd[1475]: time="2025-02-13T15:11:07.623043884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:07.623088 containerd[1475]: time="2025-02-13T15:11:07.623054365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:07.623359 containerd[1475]: time="2025-02-13T15:11:07.623130451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:07.637520 kubelet[2624]: E0213 15:11:07.637497 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.637667 kubelet[2624]: W0213 15:11:07.637631 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.637894 kubelet[2624]: E0213 15:11:07.637751 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.638038 kubelet[2624]: E0213 15:11:07.638024 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.638109 kubelet[2624]: W0213 15:11:07.638096 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.638171 kubelet[2624]: E0213 15:11:07.638162 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.638404 kubelet[2624]: E0213 15:11:07.638390 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.638579 kubelet[2624]: W0213 15:11:07.638468 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.638579 kubelet[2624]: E0213 15:11:07.638495 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.638760 kubelet[2624]: E0213 15:11:07.638746 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.638832 kubelet[2624]: W0213 15:11:07.638813 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.638899 kubelet[2624]: E0213 15:11:07.638890 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.639308 kubelet[2624]: E0213 15:11:07.639162 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.639308 kubelet[2624]: W0213 15:11:07.639174 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.639308 kubelet[2624]: E0213 15:11:07.639190 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.639497 kubelet[2624]: E0213 15:11:07.639483 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.639561 kubelet[2624]: W0213 15:11:07.639550 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.639732 kubelet[2624]: E0213 15:11:07.639616 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.640049 kubelet[2624]: E0213 15:11:07.640033 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.640136 kubelet[2624]: W0213 15:11:07.640112 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.640200 kubelet[2624]: E0213 15:11:07.640177 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.640550 kubelet[2624]: E0213 15:11:07.640447 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.640550 kubelet[2624]: W0213 15:11:07.640459 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.640550 kubelet[2624]: E0213 15:11:07.640476 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.640754 kubelet[2624]: E0213 15:11:07.640741 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.640830 kubelet[2624]: W0213 15:11:07.640811 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.640895 kubelet[2624]: E0213 15:11:07.640886 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.641142 kubelet[2624]: E0213 15:11:07.641123 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.641142 kubelet[2624]: W0213 15:11:07.641140 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.641210 kubelet[2624]: E0213 15:11:07.641160 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.641338 kubelet[2624]: E0213 15:11:07.641326 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.641338 kubelet[2624]: W0213 15:11:07.641337 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.641394 kubelet[2624]: E0213 15:11:07.641348 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.641539 kubelet[2624]: E0213 15:11:07.641528 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.641539 kubelet[2624]: W0213 15:11:07.641538 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.641638 kubelet[2624]: E0213 15:11:07.641611 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.641715 kubelet[2624]: E0213 15:11:07.641683 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.641715 kubelet[2624]: W0213 15:11:07.641697 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.641787 kubelet[2624]: E0213 15:11:07.641767 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.641835 kubelet[2624]: E0213 15:11:07.641829 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.641874 kubelet[2624]: W0213 15:11:07.641835 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.641874 kubelet[2624]: E0213 15:11:07.641850 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.642011 kubelet[2624]: E0213 15:11:07.642001 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.642011 kubelet[2624]: W0213 15:11:07.642011 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.642068 kubelet[2624]: E0213 15:11:07.642024 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.642423 kubelet[2624]: E0213 15:11:07.642325 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.642423 kubelet[2624]: W0213 15:11:07.642339 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.642423 kubelet[2624]: E0213 15:11:07.642353 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.642614 kubelet[2624]: E0213 15:11:07.642584 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.642754 kubelet[2624]: W0213 15:11:07.642684 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.642754 kubelet[2624]: E0213 15:11:07.642714 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.643124 kubelet[2624]: E0213 15:11:07.643074 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.643124 kubelet[2624]: W0213 15:11:07.643087 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.643348 kubelet[2624]: E0213 15:11:07.643277 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.643489 systemd[1]: Started cri-containerd-5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a.scope - libcontainer container 5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a. 
Feb 13 15:11:07.644382 kubelet[2624]: E0213 15:11:07.644010 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.644382 kubelet[2624]: W0213 15:11:07.644027 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.644382 kubelet[2624]: E0213 15:11:07.644065 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.644828 kubelet[2624]: E0213 15:11:07.644699 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.644828 kubelet[2624]: W0213 15:11:07.644718 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.644915 kubelet[2624]: E0213 15:11:07.644838 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.645024 kubelet[2624]: E0213 15:11:07.645010 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.645165 kubelet[2624]: W0213 15:11:07.645078 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.645165 kubelet[2624]: E0213 15:11:07.645107 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.645463 kubelet[2624]: E0213 15:11:07.645447 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.646395 kubelet[2624]: W0213 15:11:07.645611 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.646618 kubelet[2624]: E0213 15:11:07.646486 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.646784 kubelet[2624]: E0213 15:11:07.646768 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.646992 kubelet[2624]: W0213 15:11:07.646851 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.646992 kubelet[2624]: E0213 15:11:07.646874 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:07.647137 kubelet[2624]: E0213 15:11:07.647125 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.647189 kubelet[2624]: W0213 15:11:07.647179 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.647248 kubelet[2624]: E0213 15:11:07.647240 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.647641 kubelet[2624]: E0213 15:11:07.647486 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.647641 kubelet[2624]: W0213 15:11:07.647499 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.647641 kubelet[2624]: E0213 15:11:07.647511 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.652041 kubelet[2624]: E0213 15:11:07.651975 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:07.652041 kubelet[2624]: W0213 15:11:07.651993 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:07.652041 kubelet[2624]: E0213 15:11:07.652012 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:07.668033 containerd[1475]: time="2025-02-13T15:11:07.667800659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bplkh,Uid:6e64515d-6ea9-4651-b9ad-fc36ab2cd484,Namespace:calico-system,Attempt:0,} returns sandbox id \"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\"" Feb 13 15:11:07.670707 kubelet[2624]: E0213 15:11:07.669256 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:08.709736 kubelet[2624]: E0213 15:11:08.709387 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:09.595815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145122209.mount: Deactivated successfully. 
Feb 13 15:11:09.883671 containerd[1475]: time="2025-02-13T15:11:09.883485183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:09.884066 containerd[1475]: time="2025-02-13T15:11:09.883884775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:11:09.884981 containerd[1475]: time="2025-02-13T15:11:09.884942500Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:09.886731 containerd[1475]: time="2025-02-13T15:11:09.886699241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:09.888384 containerd[1475]: time="2025-02-13T15:11:09.887984544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.358712385s" Feb 13 15:11:09.888384 containerd[1475]: time="2025-02-13T15:11:09.888020267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:11:09.888885 containerd[1475]: time="2025-02-13T15:11:09.888691801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:11:09.897048 containerd[1475]: time="2025-02-13T15:11:09.897011749Z" level=info msg="CreateContainer within sandbox \"4ebdd1e80a4186fab30678b8fce2aa6005d42e422ba8313bab15aaac7f3a40d0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:11:09.908021 containerd[1475]: time="2025-02-13T15:11:09.907976950Z" level=info msg="CreateContainer within sandbox \"4ebdd1e80a4186fab30678b8fce2aa6005d42e422ba8313bab15aaac7f3a40d0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6bf8de23be93398c46e018d75bee4722facadfcf7f441c42c208e6143702c27e\"" Feb 13 15:11:09.908735 containerd[1475]: time="2025-02-13T15:11:09.908699528Z" level=info msg="StartContainer for \"6bf8de23be93398c46e018d75bee4722facadfcf7f441c42c208e6143702c27e\"" Feb 13 15:11:09.934860 systemd[1]: Started cri-containerd-6bf8de23be93398c46e018d75bee4722facadfcf7f441c42c208e6143702c27e.scope - libcontainer container 6bf8de23be93398c46e018d75bee4722facadfcf7f441c42c208e6143702c27e. 
Feb 13 15:11:09.968055 containerd[1475]: time="2025-02-13T15:11:09.967494610Z" level=info msg="StartContainer for \"6bf8de23be93398c46e018d75bee4722facadfcf7f441c42c208e6143702c27e\" returns successfully" Feb 13 15:11:10.709588 kubelet[2624]: E0213 15:11:10.709252 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:10.770947 kubelet[2624]: E0213 15:11:10.770916 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:10.782155 kubelet[2624]: I0213 15:11:10.782080 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5d67c7f966-db2ng" podStartSLOduration=1.422695611 podStartE2EDuration="3.782040689s" podCreationTimestamp="2025-02-13 15:11:07 +0000 UTC" firstStartedPulling="2025-02-13 15:11:07.529015977 +0000 UTC m=+20.912415648" lastFinishedPulling="2025-02-13 15:11:09.888361095 +0000 UTC m=+23.271760726" observedRunningTime="2025-02-13 15:11:10.781716064 +0000 UTC m=+24.165115735" watchObservedRunningTime="2025-02-13 15:11:10.782040689 +0000 UTC m=+24.165440360" Feb 13 15:11:10.858043 kubelet[2624]: E0213 15:11:10.858013 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.858043 kubelet[2624]: W0213 15:11:10.858037 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.858252 kubelet[2624]: E0213 15:11:10.858058 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.858252 kubelet[2624]: E0213 15:11:10.858231 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.858252 kubelet[2624]: W0213 15:11:10.858239 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.858252 kubelet[2624]: E0213 15:11:10.858253 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.858425 kubelet[2624]: E0213 15:11:10.858413 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.858452 kubelet[2624]: W0213 15:11:10.858426 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.858452 kubelet[2624]: E0213 15:11:10.858437 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:10.858683 kubelet[2624]: E0213 15:11:10.858641 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.858683 kubelet[2624]: W0213 15:11:10.858683 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.858748 kubelet[2624]: E0213 15:11:10.858726 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.858922 kubelet[2624]: E0213 15:11:10.858910 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.858958 kubelet[2624]: W0213 15:11:10.858922 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.858958 kubelet[2624]: E0213 15:11:10.858933 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.859097 kubelet[2624]: E0213 15:11:10.859086 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.859097 kubelet[2624]: W0213 15:11:10.859096 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.859159 kubelet[2624]: E0213 15:11:10.859108 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.859253 kubelet[2624]: E0213 15:11:10.859243 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.859253 kubelet[2624]: W0213 15:11:10.859253 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.859339 kubelet[2624]: E0213 15:11:10.859263 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.859404 kubelet[2624]: E0213 15:11:10.859392 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.859404 kubelet[2624]: W0213 15:11:10.859402 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.859618 kubelet[2624]: E0213 15:11:10.859412 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:10.859618 kubelet[2624]: E0213 15:11:10.859582 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.859618 kubelet[2624]: W0213 15:11:10.859588 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.859618 kubelet[2624]: E0213 15:11:10.859599 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.859754 kubelet[2624]: E0213 15:11:10.859740 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.859754 kubelet[2624]: W0213 15:11:10.859748 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.859817 kubelet[2624]: E0213 15:11:10.859758 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.859919 kubelet[2624]: E0213 15:11:10.859908 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.859919 kubelet[2624]: W0213 15:11:10.859918 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.860003 kubelet[2624]: E0213 15:11:10.859929 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.860145 kubelet[2624]: E0213 15:11:10.860131 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.860177 kubelet[2624]: W0213 15:11:10.860145 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.860177 kubelet[2624]: E0213 15:11:10.860158 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.860482 kubelet[2624]: E0213 15:11:10.860463 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.860512 kubelet[2624]: W0213 15:11:10.860483 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.860512 kubelet[2624]: E0213 15:11:10.860497 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:10.860714 kubelet[2624]: E0213 15:11:10.860703 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.860748 kubelet[2624]: W0213 15:11:10.860714 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.860748 kubelet[2624]: E0213 15:11:10.860725 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.860865 kubelet[2624]: E0213 15:11:10.860856 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.860865 kubelet[2624]: W0213 15:11:10.860865 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.860916 kubelet[2624]: E0213 15:11:10.860874 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.864297 kubelet[2624]: E0213 15:11:10.864271 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.864297 kubelet[2624]: W0213 15:11:10.864289 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.864420 kubelet[2624]: E0213 15:11:10.864307 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.864661 kubelet[2624]: E0213 15:11:10.864535 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.864661 kubelet[2624]: W0213 15:11:10.864544 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.864661 kubelet[2624]: E0213 15:11:10.864560 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.864869 kubelet[2624]: E0213 15:11:10.864737 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.864869 kubelet[2624]: W0213 15:11:10.864744 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.864869 kubelet[2624]: E0213 15:11:10.864765 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:10.864989 kubelet[2624]: E0213 15:11:10.864974 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.864989 kubelet[2624]: W0213 15:11:10.864986 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.865046 kubelet[2624]: E0213 15:11:10.865001 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.865184 kubelet[2624]: E0213 15:11:10.865164 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.865184 kubelet[2624]: W0213 15:11:10.865177 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.865269 kubelet[2624]: E0213 15:11:10.865190 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.865343 kubelet[2624]: E0213 15:11:10.865332 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.865379 kubelet[2624]: W0213 15:11:10.865343 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.865379 kubelet[2624]: E0213 15:11:10.865367 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.865542 kubelet[2624]: E0213 15:11:10.865530 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.865542 kubelet[2624]: W0213 15:11:10.865540 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.865712 kubelet[2624]: E0213 15:11:10.865554 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.866262 kubelet[2624]: E0213 15:11:10.866220 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.866262 kubelet[2624]: W0213 15:11:10.866239 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.866262 kubelet[2624]: E0213 15:11:10.866260 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:10.866680 kubelet[2624]: E0213 15:11:10.866460 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.866680 kubelet[2624]: W0213 15:11:10.866471 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.866680 kubelet[2624]: E0213 15:11:10.866525 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.866680 kubelet[2624]: E0213 15:11:10.866656 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.866680 kubelet[2624]: W0213 15:11:10.866663 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.866680 kubelet[2624]: E0213 15:11:10.866680 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.866856 kubelet[2624]: E0213 15:11:10.866813 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.866856 kubelet[2624]: W0213 15:11:10.866820 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.866856 kubelet[2624]: E0213 15:11:10.866835 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.867004 kubelet[2624]: E0213 15:11:10.866990 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.867004 kubelet[2624]: W0213 15:11:10.867000 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.867055 kubelet[2624]: E0213 15:11:10.867014 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.867189 kubelet[2624]: E0213 15:11:10.867177 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.867189 kubelet[2624]: W0213 15:11:10.867188 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.867236 kubelet[2624]: E0213 15:11:10.867205 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:10.867441 kubelet[2624]: E0213 15:11:10.867426 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.867441 kubelet[2624]: W0213 15:11:10.867439 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.867502 kubelet[2624]: E0213 15:11:10.867457 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.867668 kubelet[2624]: E0213 15:11:10.867631 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.867668 kubelet[2624]: W0213 15:11:10.867641 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.867718 kubelet[2624]: E0213 15:11:10.867680 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.867957 kubelet[2624]: E0213 15:11:10.867862 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.867957 kubelet[2624]: W0213 15:11:10.867870 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.867957 kubelet[2624]: E0213 15:11:10.867886 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.868395 kubelet[2624]: E0213 15:11:10.868108 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.868395 kubelet[2624]: W0213 15:11:10.868119 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.868395 kubelet[2624]: E0213 15:11:10.868138 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:11:10.868395 kubelet[2624]: E0213 15:11:10.868353 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:11:10.868395 kubelet[2624]: W0213 15:11:10.868362 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:11:10.868395 kubelet[2624]: E0213 15:11:10.868374 2624 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:11:11.609618 containerd[1475]: time="2025-02-13T15:11:11.609560362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:11.610636 containerd[1475]: time="2025-02-13T15:11:11.610578517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:11:11.611575 containerd[1475]: time="2025-02-13T15:11:11.611540628Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:11.614886 containerd[1475]: time="2025-02-13T15:11:11.614852312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:11.616769 containerd[1475]: time="2025-02-13T15:11:11.616728650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.728001607s" Feb 13 15:11:11.616822 containerd[1475]: time="2025-02-13T15:11:11.616771733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:11:11.619596 containerd[1475]: time="2025-02-13T15:11:11.619551498Z" level=info msg="CreateContainer within sandbox \"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:11:11.639328 containerd[1475]: time="2025-02-13T15:11:11.639278190Z" level=info msg="CreateContainer within sandbox \"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd\"" Feb 13 15:11:11.639965 containerd[1475]: time="2025-02-13T15:11:11.639830871Z" level=info msg="StartContainer for \"a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd\"" Feb 13 15:11:11.672859 systemd[1]: Started cri-containerd-a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd.scope - libcontainer container a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd. Feb 13 15:11:11.702765 containerd[1475]: time="2025-02-13T15:11:11.702708460Z" level=info msg="StartContainer for \"a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd\" returns successfully" Feb 13 15:11:11.729420 systemd[1]: cri-containerd-a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd.scope: Deactivated successfully. 
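The burst of FlexVolume messages above appears to be kubelet's plugin prober repeatedly invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with `init` before Calico's flexvol-driver init container (built from the pod2daemon-flexvol image pulled just above) has installed that binary. With nothing to execute the call returns empty output, and unmarshalling an empty byte slice is exactly what yields "unexpected end of JSON input". A minimal Go sketch of that failure mode; the `driverStatus` type and field names are illustrative, not copied from kubelet:

```go
// Minimal sketch (not kubelet's actual code): unmarshalling an empty driver
// response reproduces the "unexpected end of JSON input" error logged above.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the general shape of a FlexVolume driver reply;
// the field names here are illustrative assumptions.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus

	// The driver binary is missing, so the "init" call produces no output.
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("unmarshal failed:", err) // unexpected end of JSON input
	}

	// Once the flexvol-driver init container has installed the binary, a
	// successful "init" call returns JSON along these lines instead.
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Println("status:", st.Status, "capabilities:", st.Capabilities)
	}
}
```

Once the flexvol-driver container started above has run to completion, later probe passes stop producing these errors.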
Feb 13 15:11:11.773213 kubelet[2624]: E0213 15:11:11.773117 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:11.781441 kubelet[2624]: I0213 15:11:11.781385 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:11.782118 kubelet[2624]: E0213 15:11:11.782085 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:11.795064 containerd[1475]: time="2025-02-13T15:11:11.794931770Z" level=info msg="shim disconnected" id=a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd namespace=k8s.io Feb 13 15:11:11.795064 containerd[1475]: time="2025-02-13T15:11:11.794990735Z" level=warning msg="cleaning up after shim disconnected" id=a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd namespace=k8s.io Feb 13 15:11:11.795064 containerd[1475]: time="2025-02-13T15:11:11.795002136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:12.629525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67fa18644dda515f9cb5f8d4d3533c4d61fcd37d52b87519571b190b96ae9fd-rootfs.mount: Deactivated successfully. Feb 13 15:11:12.711205 kubelet[2624]: E0213 15:11:12.710178 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:12.776014 kubelet[2624]: E0213 15:11:12.775823 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:12.777026 containerd[1475]: time="2025-02-13T15:11:12.776986280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:11:14.710241 kubelet[2624]: E0213 15:11:14.710185 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:15.729337 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:34952.service - OpenSSH per-connection server daemon (10.0.0.1:34952). Feb 13 15:11:15.774687 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 34952 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:15.776091 sshd-session[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:15.781678 systemd-logind[1454]: New session 8 of user core. Feb 13 15:11:15.787825 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:11:15.912429 sshd[3343]: Connection closed by 10.0.0.1 port 34952 Feb 13 15:11:15.912810 sshd-session[3341]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:15.916402 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:34952.service: Deactivated successfully. Feb 13 15:11:15.917938 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:11:15.919330 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. 
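The recurring dns.go "Nameserver limits exceeded" lines reflect kubelet capping the nameservers it propagates into pod resolv.conf at three (the classic resolver limit); the node's resolv.conf evidently lists more, so everything beyond 1.1.1.1, 1.0.0.1 and 8.8.8.8 is dropped. A rough stand-in for that truncation, assuming a plain resolv.conf format; this is not kubelet's implementation:

```go
// Illustrative only: mimics the three-nameserver cap behind the
// "Nameserver limits exceeded" log lines above.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit applied per pod

func capNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(capNameservers(conf))
}
```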
Feb 13 15:11:15.920415 systemd-logind[1454]: Removed session 8. Feb 13 15:11:16.711889 kubelet[2624]: E0213 15:11:16.711858 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:17.982333 containerd[1475]: time="2025-02-13T15:11:17.982285461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:17.982765 containerd[1475]: time="2025-02-13T15:11:17.982614280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:11:17.984067 containerd[1475]: time="2025-02-13T15:11:17.984035843Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:17.985950 containerd[1475]: time="2025-02-13T15:11:17.985925033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:17.987344 containerd[1475]: time="2025-02-13T15:11:17.987311473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 5.21027123s" Feb 13 15:11:17.987392 containerd[1475]: time="2025-02-13T15:11:17.987342595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:11:17.990771 containerd[1475]: time="2025-02-13T15:11:17.990736432Z" level=info msg="CreateContainer within sandbox \"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:11:18.009245 containerd[1475]: time="2025-02-13T15:11:18.009198966Z" level=info msg="CreateContainer within sandbox \"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847\"" Feb 13 15:11:18.014694 containerd[1475]: time="2025-02-13T15:11:18.014261649Z" level=info msg="StartContainer for \"7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847\"" Feb 13 15:11:18.046840 systemd[1]: Started cri-containerd-7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847.scope - libcontainer container 7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847. Feb 13 15:11:18.098947 containerd[1475]: time="2025-02-13T15:11:18.098899828Z" level=info msg="StartContainer for \"7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847\" returns successfully" Feb 13 15:11:18.605068 systemd[1]: cri-containerd-7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847.scope: Deactivated successfully. 
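The install-cni container started above is what eventually clears the repeated "NetworkReady=false ... cni plugin not initialized" condition: it places the Calico CNI binaries and network configuration on the node, after which the runtime can report the pod network as ready. A simplified sketch of that readiness condition, assuming the conventional /etc/cni/net.d configuration directory (the real path comes from containerd's CNI plugin configuration, which this log does not show):

```go
// Rough illustration of the readiness condition behind the repeated
// "cni plugin not initialized" messages: the runtime reports NetworkReady
// only once a CNI network config exists. Paths here are conventional
// defaults, not values read from this system.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigured(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err == nil && len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/cni/net.d" // assumed default conf_dir
	if cniConfigured(dir) {
		fmt.Println("NetworkReady=true: CNI config present in", dir)
	} else {
		fmt.Println("NetworkReady=false: no CNI config in", dir)
		os.Exit(1)
	}
}
```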
Feb 13 15:11:18.619237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847-rootfs.mount: Deactivated successfully. Feb 13 15:11:18.636325 containerd[1475]: time="2025-02-13T15:11:18.636265992Z" level=info msg="shim disconnected" id=7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847 namespace=k8s.io Feb 13 15:11:18.636325 containerd[1475]: time="2025-02-13T15:11:18.636322435Z" level=warning msg="cleaning up after shim disconnected" id=7619e2d0bbb465ec42f27fae29e6cfb67a88690ad0538f5fd89010798e07c847 namespace=k8s.io Feb 13 15:11:18.636325 containerd[1475]: time="2025-02-13T15:11:18.636330836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:18.639791 kubelet[2624]: I0213 15:11:18.639759 2624 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:11:18.665417 kubelet[2624]: I0213 15:11:18.665371 2624 topology_manager.go:215] "Topology Admit Handler" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" podNamespace="kube-system" podName="coredns-76f75df574-497kt" Feb 13 15:11:18.667755 kubelet[2624]: I0213 15:11:18.666137 2624 topology_manager.go:215] "Topology Admit Handler" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" podNamespace="calico-apiserver" podName="calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:18.669200 kubelet[2624]: I0213 15:11:18.668209 2624 topology_manager.go:215] "Topology Admit Handler" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" podNamespace="calico-system" podName="calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:18.669200 kubelet[2624]: I0213 15:11:18.668369 2624 topology_manager.go:215] "Topology Admit Handler" podUID="669d688c-25ab-473d-9d28-45c8a124548b" podNamespace="calico-apiserver" podName="calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:18.672831 kubelet[2624]: I0213 15:11:18.672805 2624 topology_manager.go:215] "Topology Admit Handler" podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" podNamespace="kube-system" podName="coredns-76f75df574-72d96" Feb 13 15:11:18.677616 systemd[1]: Created slice kubepods-burstable-podb1324758_fb3a_44a6_944b_64a2fbd93ce8.slice - libcontainer container kubepods-burstable-podb1324758_fb3a_44a6_944b_64a2fbd93ce8.slice. Feb 13 15:11:18.685700 systemd[1]: Created slice kubepods-besteffort-pod3354d09c_c5d1_4b08_92f8_0175175a9438.slice - libcontainer container kubepods-besteffort-pod3354d09c_c5d1_4b08_92f8_0175175a9438.slice. Feb 13 15:11:18.692241 systemd[1]: Created slice kubepods-besteffort-pod669d688c_25ab_473d_9d28_45c8a124548b.slice - libcontainer container kubepods-besteffort-pod669d688c_25ab_473d_9d28_45c8a124548b.slice. Feb 13 15:11:18.697062 systemd[1]: Created slice kubepods-besteffort-podffa4f9e1_7dbe_408b_9c49_96a8006df152.slice - libcontainer container kubepods-besteffort-podffa4f9e1_7dbe_408b_9c49_96a8006df152.slice. Feb 13 15:11:18.703279 systemd[1]: Created slice kubepods-burstable-pod9a547d44_0314_4436_850e_6c8fdf4e6cfd.slice - libcontainer container kubepods-burstable-pod9a547d44_0314_4436_850e_6c8fdf4e6cfd.slice. Feb 13 15:11:18.722909 systemd[1]: Created slice kubepods-besteffort-poda5494d8d_0818_4dbe_926f_03408aa43bf9.slice - libcontainer container kubepods-besteffort-poda5494d8d_0818_4dbe_926f_03408aa43bf9.slice. 
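The kubepods-*.slice units created above follow the naming scheme visible in the log itself: the pod's QoS class (burstable for the coredns pods, besteffort for the Calico workloads) plus the pod UID with dashes replaced by underscores, wrapped as a systemd slice. A small sketch of that mapping, derived from these log lines rather than from kubelet source; the guaranteed QoS case is not shown in this log and is omitted:

```go
// Sketch of the slice-naming convention seen in the systemd lines above:
// kubepods-<qos>-pod<uid with '-' -> '_'>.slice. Derived from the log text,
// not from the kubelet's cgroup driver code.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Matches kubepods-burstable-podb1324758_fb3a_44a6_944b_64a2fbd93ce8.slice
	fmt.Println(podSlice("burstable", "b1324758-fb3a-44a6-944b-64a2fbd93ce8"))
	// Matches kubepods-besteffort-poda5494d8d_0818_4dbe_926f_03408aa43bf9.slice
	fmt.Println(podSlice("besteffort", "a5494d8d-0818-4dbe-926f-03408aa43bf9"))
}
```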
Feb 13 15:11:18.725062 containerd[1475]: time="2025-02-13T15:11:18.724925516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:0,}" Feb 13 15:11:18.788374 kubelet[2624]: E0213 15:11:18.788327 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:18.789978 containerd[1475]: time="2025-02-13T15:11:18.789948956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:11:18.827326 kubelet[2624]: I0213 15:11:18.827277 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgdz\" (UniqueName: \"kubernetes.io/projected/ffa4f9e1-7dbe-408b-9c49-96a8006df152-kube-api-access-5jgdz\") pod \"calico-kube-controllers-c4c875978-b5v57\" (UID: \"ffa4f9e1-7dbe-408b-9c49-96a8006df152\") " pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:18.827326 kubelet[2624]: I0213 15:11:18.827330 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48p7\" (UniqueName: \"kubernetes.io/projected/b1324758-fb3a-44a6-944b-64a2fbd93ce8-kube-api-access-h48p7\") pod \"coredns-76f75df574-497kt\" (UID: \"b1324758-fb3a-44a6-944b-64a2fbd93ce8\") " pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:18.827494 kubelet[2624]: I0213 15:11:18.827355 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3354d09c-c5d1-4b08-92f8-0175175a9438-calico-apiserver-certs\") pod \"calico-apiserver-54bd4f4757-r8prd\" (UID: \"3354d09c-c5d1-4b08-92f8-0175175a9438\") " pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:18.827494 kubelet[2624]: I0213 15:11:18.827379 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgvj\" (UniqueName: \"kubernetes.io/projected/9a547d44-0314-4436-850e-6c8fdf4e6cfd-kube-api-access-stgvj\") pod \"coredns-76f75df574-72d96\" (UID: \"9a547d44-0314-4436-850e-6c8fdf4e6cfd\") " pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:18.827494 kubelet[2624]: I0213 15:11:18.827401 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a547d44-0314-4436-850e-6c8fdf4e6cfd-config-volume\") pod \"coredns-76f75df574-72d96\" (UID: \"9a547d44-0314-4436-850e-6c8fdf4e6cfd\") " pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:18.827494 kubelet[2624]: I0213 15:11:18.827438 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/669d688c-25ab-473d-9d28-45c8a124548b-calico-apiserver-certs\") pod \"calico-apiserver-54bd4f4757-bzs76\" (UID: \"669d688c-25ab-473d-9d28-45c8a124548b\") " pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:18.827494 kubelet[2624]: I0213 15:11:18.827471 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzt9\" (UniqueName: \"kubernetes.io/projected/669d688c-25ab-473d-9d28-45c8a124548b-kube-api-access-5nzt9\") pod \"calico-apiserver-54bd4f4757-bzs76\" (UID: \"669d688c-25ab-473d-9d28-45c8a124548b\") " 
pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:18.827620 kubelet[2624]: I0213 15:11:18.827491 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8htc\" (UniqueName: \"kubernetes.io/projected/3354d09c-c5d1-4b08-92f8-0175175a9438-kube-api-access-m8htc\") pod \"calico-apiserver-54bd4f4757-r8prd\" (UID: \"3354d09c-c5d1-4b08-92f8-0175175a9438\") " pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:18.827620 kubelet[2624]: I0213 15:11:18.827514 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1324758-fb3a-44a6-944b-64a2fbd93ce8-config-volume\") pod \"coredns-76f75df574-497kt\" (UID: \"b1324758-fb3a-44a6-944b-64a2fbd93ce8\") " pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:18.827620 kubelet[2624]: I0213 15:11:18.827539 2624 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa4f9e1-7dbe-408b-9c49-96a8006df152-tigera-ca-bundle\") pod \"calico-kube-controllers-c4c875978-b5v57\" (UID: \"ffa4f9e1-7dbe-408b-9c49-96a8006df152\") " pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:18.893855 containerd[1475]: time="2025-02-13T15:11:18.893800530Z" level=error msg="Failed to destroy network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:18.900470 containerd[1475]: time="2025-02-13T15:11:18.899673819Z" level=error msg="encountered an error cleaning up failed sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:18.900470 containerd[1475]: time="2025-02-13T15:11:18.899800586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:18.902590 kubelet[2624]: E0213 15:11:18.902548 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:18.902679 kubelet[2624]: E0213 15:11:18.902625 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:18.902679 kubelet[2624]: E0213 15:11:18.902661 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:18.902750 kubelet[2624]: E0213 15:11:18.902736 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:18.982853 kubelet[2624]: E0213 15:11:18.982808 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:18.983638 containerd[1475]: time="2025-02-13T15:11:18.983336223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:18.989805 containerd[1475]: time="2025-02-13T15:11:18.989618814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:11:18.995487 containerd[1475]: time="2025-02-13T15:11:18.995447461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:11:19.002465 containerd[1475]: time="2025-02-13T15:11:19.002424730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:0,}" Feb 13 15:11:19.005952 kubelet[2624]: E0213 15:11:19.005926 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:19.006481 containerd[1475]: time="2025-02-13T15:11:19.006440947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:19.080278 containerd[1475]: time="2025-02-13T15:11:19.079720588Z" level=error msg="Failed to destroy network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.080278 containerd[1475]: 
time="2025-02-13T15:11:19.080031205Z" level=error msg="encountered an error cleaning up failed sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.080278 containerd[1475]: time="2025-02-13T15:11:19.080089448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.080720 kubelet[2624]: E0213 15:11:19.080688 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.080787 kubelet[2624]: E0213 15:11:19.080745 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:19.080787 kubelet[2624]: E0213 15:11:19.080770 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:19.080895 kubelet[2624]: E0213 15:11:19.080847 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" Feb 13 15:11:19.117058 containerd[1475]: time="2025-02-13T15:11:19.116999843Z" level=error msg="Failed to destroy network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.117352 containerd[1475]: time="2025-02-13T15:11:19.117318700Z" level=error msg="encountered an error cleaning up failed sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.117399 containerd[1475]: time="2025-02-13T15:11:19.117381184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.117724 kubelet[2624]: E0213 15:11:19.117689 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.117809 kubelet[2624]: E0213 15:11:19.117749 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:19.117809 kubelet[2624]: E0213 15:11:19.117770 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:19.117901 kubelet[2624]: E0213 15:11:19.117825 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podUID="669d688c-25ab-473d-9d28-45c8a124548b" Feb 13 15:11:19.126609 containerd[1475]: time="2025-02-13T15:11:19.126545279Z" level=error msg="Failed to destroy network for sandbox 
\"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.127792 containerd[1475]: time="2025-02-13T15:11:19.127748384Z" level=error msg="encountered an error cleaning up failed sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.127850 containerd[1475]: time="2025-02-13T15:11:19.127820508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.128087 kubelet[2624]: E0213 15:11:19.128046 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.128139 kubelet[2624]: E0213 15:11:19.128106 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:19.128139 kubelet[2624]: E0213 15:11:19.128127 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:19.128188 kubelet[2624]: E0213 15:11:19.128177 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-497kt" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" Feb 13 15:11:19.140707 containerd[1475]: 
time="2025-02-13T15:11:19.140634441Z" level=error msg="Failed to destroy network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.141565 containerd[1475]: time="2025-02-13T15:11:19.141524169Z" level=error msg="encountered an error cleaning up failed sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.141625 containerd[1475]: time="2025-02-13T15:11:19.141605373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.141886 kubelet[2624]: E0213 15:11:19.141854 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.141948 kubelet[2624]: E0213 15:11:19.141913 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:19.141948 kubelet[2624]: E0213 15:11:19.141935 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:19.142089 kubelet[2624]: E0213 15:11:19.141989 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-72d96" 
podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" Feb 13 15:11:19.145621 containerd[1475]: time="2025-02-13T15:11:19.145506624Z" level=error msg="Failed to destroy network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.145857 containerd[1475]: time="2025-02-13T15:11:19.145828801Z" level=error msg="encountered an error cleaning up failed sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.145908 containerd[1475]: time="2025-02-13T15:11:19.145888965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.146076 kubelet[2624]: E0213 15:11:19.146057 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.146111 kubelet[2624]: E0213 15:11:19.146105 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:19.146132 kubelet[2624]: E0213 15:11:19.146127 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:19.146181 kubelet[2624]: E0213 15:11:19.146170 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" Feb 13 15:11:19.790603 kubelet[2624]: I0213 15:11:19.790508 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5" Feb 13 15:11:19.792035 containerd[1475]: time="2025-02-13T15:11:19.791886563Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:19.792743 containerd[1475]: time="2025-02-13T15:11:19.792137657Z" level=info msg="Ensure that sandbox 273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5 in task-service has been cleanup successfully" Feb 13 15:11:19.792743 containerd[1475]: time="2025-02-13T15:11:19.792705648Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:19.792743 containerd[1475]: time="2025-02-13T15:11:19.792728289Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:19.793373 kubelet[2624]: I0213 15:11:19.793356 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b" Feb 13 15:11:19.793459 containerd[1475]: time="2025-02-13T15:11:19.793437367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:11:19.793990 containerd[1475]: time="2025-02-13T15:11:19.793736703Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:19.793990 containerd[1475]: time="2025-02-13T15:11:19.793878391Z" level=info msg="Ensure that sandbox b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b in task-service has been cleanup successfully" Feb 13 15:11:19.794139 containerd[1475]: time="2025-02-13T15:11:19.794120964Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:19.794198 containerd[1475]: time="2025-02-13T15:11:19.794186448Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:19.794415 kubelet[2624]: E0213 15:11:19.794396 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:19.794902 containerd[1475]: time="2025-02-13T15:11:19.794639352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:1,}" Feb 13 15:11:19.795226 kubelet[2624]: I0213 15:11:19.795208 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226" Feb 13 15:11:19.795882 containerd[1475]: time="2025-02-13T15:11:19.795670088Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:19.795882 containerd[1475]: time="2025-02-13T15:11:19.795800095Z" level=info msg="Ensure that sandbox 
46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226 in task-service has been cleanup successfully" Feb 13 15:11:19.796169 containerd[1475]: time="2025-02-13T15:11:19.796142153Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:19.796231 containerd[1475]: time="2025-02-13T15:11:19.796159314Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:19.796602 containerd[1475]: time="2025-02-13T15:11:19.796572697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:1,}" Feb 13 15:11:19.797239 kubelet[2624]: I0213 15:11:19.797217 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab" Feb 13 15:11:19.797732 containerd[1475]: time="2025-02-13T15:11:19.797676436Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:19.797886 containerd[1475]: time="2025-02-13T15:11:19.797818804Z" level=info msg="Ensure that sandbox 49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab in task-service has been cleanup successfully" Feb 13 15:11:19.798106 containerd[1475]: time="2025-02-13T15:11:19.798087259Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:19.798106 containerd[1475]: time="2025-02-13T15:11:19.798104859Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:19.800835 containerd[1475]: time="2025-02-13T15:11:19.800502069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:11:19.801791 kubelet[2624]: I0213 15:11:19.801770 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c" Feb 13 15:11:19.803590 containerd[1475]: time="2025-02-13T15:11:19.803550674Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:19.804435 containerd[1475]: time="2025-02-13T15:11:19.804004178Z" level=info msg="Ensure that sandbox ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c in task-service has been cleanup successfully" Feb 13 15:11:19.804435 containerd[1475]: time="2025-02-13T15:11:19.804179948Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:19.804435 containerd[1475]: time="2025-02-13T15:11:19.804210230Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:19.804924 containerd[1475]: time="2025-02-13T15:11:19.804893266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:1,}" Feb 13 15:11:19.805350 kubelet[2624]: I0213 15:11:19.805314 2624 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578" Feb 13 15:11:19.805849 containerd[1475]: time="2025-02-13T15:11:19.805820877Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:19.806348 containerd[1475]: time="2025-02-13T15:11:19.806321904Z" level=info msg="Ensure that sandbox 94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578 in task-service has been cleanup successfully" Feb 13 15:11:19.806609 containerd[1475]: time="2025-02-13T15:11:19.806588918Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:19.806763 containerd[1475]: time="2025-02-13T15:11:19.806689684Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:19.807375 kubelet[2624]: E0213 15:11:19.807334 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:19.807718 containerd[1475]: time="2025-02-13T15:11:19.807631374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:1,}" Feb 13 15:11:19.876672 containerd[1475]: time="2025-02-13T15:11:19.876518258Z" level=error msg="Failed to destroy network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.878782 containerd[1475]: time="2025-02-13T15:11:19.878723537Z" level=error msg="encountered an error cleaning up failed sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.879101 containerd[1475]: time="2025-02-13T15:11:19.878873225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.879215 kubelet[2624]: E0213 15:11:19.879191 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.879257 kubelet[2624]: E0213 15:11:19.879249 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:19.879280 kubelet[2624]: E0213 15:11:19.879269 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:19.879350 kubelet[2624]: E0213 15:11:19.879327 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" Feb 13 15:11:19.895854 containerd[1475]: time="2025-02-13T15:11:19.895776299Z" level=error msg="Failed to destroy network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.896328 containerd[1475]: time="2025-02-13T15:11:19.896297967Z" level=error msg="encountered an error cleaning up failed sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.896443 containerd[1475]: time="2025-02-13T15:11:19.896423974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.896768 kubelet[2624]: E0213 15:11:19.896741 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.896847 kubelet[2624]: E0213 15:11:19.896815 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:19.896847 kubelet[2624]: E0213 15:11:19.896838 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:19.896942 kubelet[2624]: E0213 15:11:19.896896 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-72d96" podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" Feb 13 15:11:19.940623 containerd[1475]: time="2025-02-13T15:11:19.940553319Z" level=error msg="Failed to destroy network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.944676 containerd[1475]: time="2025-02-13T15:11:19.942859404Z" level=error msg="encountered an error cleaning up failed sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.944676 containerd[1475]: time="2025-02-13T15:11:19.944149194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.944829 kubelet[2624]: E0213 15:11:19.944769 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.946742 kubelet[2624]: E0213 15:11:19.944876 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:19.946742 kubelet[2624]: E0213 15:11:19.944928 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:19.946742 kubelet[2624]: E0213 15:11:19.945008 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:19.949682 containerd[1475]: time="2025-02-13T15:11:19.947361407Z" level=error msg="Failed to destroy network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.950768 containerd[1475]: time="2025-02-13T15:11:19.950736430Z" level=error msg="encountered an error cleaning up failed sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.950915 containerd[1475]: time="2025-02-13T15:11:19.950895518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.953032 kubelet[2624]: E0213 15:11:19.952992 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.953247 kubelet[2624]: E0213 
15:11:19.953154 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:19.953247 kubelet[2624]: E0213 15:11:19.953180 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:19.953851 kubelet[2624]: E0213 15:11:19.953415 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" Feb 13 15:11:19.964158 containerd[1475]: time="2025-02-13T15:11:19.963610606Z" level=error msg="Failed to destroy network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.964158 containerd[1475]: time="2025-02-13T15:11:19.964018508Z" level=error msg="encountered an error cleaning up failed sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.964158 containerd[1475]: time="2025-02-13T15:11:19.964086431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.964328 kubelet[2624]: E0213 15:11:19.964314 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.964362 kubelet[2624]: E0213 15:11:19.964357 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:19.964387 kubelet[2624]: E0213 15:11:19.964377 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:19.965045 kubelet[2624]: E0213 15:11:19.964451 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-497kt" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" Feb 13 15:11:19.966167 containerd[1475]: time="2025-02-13T15:11:19.966038777Z" level=error msg="Failed to destroy network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.966442 containerd[1475]: time="2025-02-13T15:11:19.966415917Z" level=error msg="encountered an error cleaning up failed sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.966559 containerd[1475]: time="2025-02-13T15:11:19.966537364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.966850 kubelet[2624]: E0213 15:11:19.966815 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:19.966910 kubelet[2624]: E0213 15:11:19.966858 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:19.966910 kubelet[2624]: E0213 15:11:19.966876 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:19.966960 kubelet[2624]: E0213 15:11:19.966915 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podUID="669d688c-25ab-473d-9d28-45c8a124548b" Feb 13 15:11:20.008051 systemd[1]: run-netns-cni\x2d76273936\x2d0eb5\x2deec5\x2d972c\x2d007c45982cd5.mount: Deactivated successfully. Feb 13 15:11:20.008149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab-shm.mount: Deactivated successfully. Feb 13 15:11:20.008206 systemd[1]: run-netns-cni\x2da5db7ee1\x2d240a\x2d53ed\x2d20d6\x2deef4ef85d389.mount: Deactivated successfully. Feb 13 15:11:20.008255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578-shm.mount: Deactivated successfully. Feb 13 15:11:20.008311 systemd[1]: run-netns-cni\x2d691e881e\x2d408f\x2d930b\x2df1c3\x2d17ce200a0df6.mount: Deactivated successfully. Feb 13 15:11:20.008356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5-shm.mount: Deactivated successfully. Feb 13 15:11:20.008408 systemd[1]: run-netns-cni\x2d5f685190\x2d8b37\x2dbfc1\x2d8964\x2db98a60e44a9b.mount: Deactivated successfully. 
Feb 13 15:11:20.810895 kubelet[2624]: I0213 15:11:20.810857 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d" Feb 13 15:11:20.811671 containerd[1475]: time="2025-02-13T15:11:20.811609136Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:20.813008 containerd[1475]: time="2025-02-13T15:11:20.811793946Z" level=info msg="Ensure that sandbox f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d in task-service has been cleanup successfully" Feb 13 15:11:20.813574 systemd[1]: run-netns-cni\x2de2290679\x2dc99d\x2d2569\x2daeab\x2d0528ef3d8719.mount: Deactivated successfully. Feb 13 15:11:20.814358 kubelet[2624]: I0213 15:11:20.814082 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72" Feb 13 15:11:20.815146 containerd[1475]: time="2025-02-13T15:11:20.815116879Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:20.815253 containerd[1475]: time="2025-02-13T15:11:20.815227965Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:20.815503 containerd[1475]: time="2025-02-13T15:11:20.815450177Z" level=info msg="Ensure that sandbox 884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72 in task-service has been cleanup successfully" Feb 13 15:11:20.815833 containerd[1475]: time="2025-02-13T15:11:20.815640107Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:20.816084 containerd[1475]: time="2025-02-13T15:11:20.815997405Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:20.816084 containerd[1475]: time="2025-02-13T15:11:20.816071209Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:20.816337 containerd[1475]: time="2025-02-13T15:11:20.816203496Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:20.816472 containerd[1475]: time="2025-02-13T15:11:20.816442629Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:20.816620 containerd[1475]: time="2025-02-13T15:11:20.816599597Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:20.816620 containerd[1475]: time="2025-02-13T15:11:20.816619078Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:20.816882 containerd[1475]: time="2025-02-13T15:11:20.816846610Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:20.817044 kubelet[2624]: I0213 15:11:20.817003 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9" Feb 13 15:11:20.817088 kubelet[2624]: E0213 15:11:20.817033 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:20.817223 containerd[1475]: time="2025-02-13T15:11:20.817196908Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:20.817550 kubelet[2624]: E0213 15:11:20.817525 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:20.817655 containerd[1475]: time="2025-02-13T15:11:20.817599329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:2,}" Feb 13 15:11:20.817903 containerd[1475]: time="2025-02-13T15:11:20.817875263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:2,}" Feb 13 15:11:20.819101 containerd[1475]: time="2025-02-13T15:11:20.819065726Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:20.819231 containerd[1475]: time="2025-02-13T15:11:20.819204693Z" level=info msg="Ensure that sandbox cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9 in task-service has been cleanup successfully" Feb 13 15:11:20.820742 containerd[1475]: time="2025-02-13T15:11:20.819909810Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:20.820742 containerd[1475]: time="2025-02-13T15:11:20.819935131Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:20.823618 containerd[1475]: time="2025-02-13T15:11:20.821162875Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:20.823618 containerd[1475]: time="2025-02-13T15:11:20.821368886Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:20.823618 containerd[1475]: time="2025-02-13T15:11:20.821418929Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:20.821536 systemd[1]: run-netns-cni\x2d4e4c85b1\x2d8dd5\x2d54fd\x2daed3\x2d3942389e32b8.mount: Deactivated successfully. Feb 13 15:11:20.824832 containerd[1475]: time="2025-02-13T15:11:20.824782224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:2,}" Feb 13 15:11:20.825918 kubelet[2624]: I0213 15:11:20.825843 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534" Feb 13 15:11:20.826410 containerd[1475]: time="2025-02-13T15:11:20.826326305Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:20.826361 systemd[1]: run-netns-cni\x2d1c553e20\x2dd2e2\x2d5a50\x2de99f\x2dad6c46634a21.mount: Deactivated successfully. 
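The recurring dns.go "Nameserver limits exceeded" events are independent of the Calico failures: the node's resolv.conf carries more nameservers than the three the resolver (and kubelet) will honour, so kubelet trims the list down to the applied line shown, 1.1.1.1 1.0.0.1 8.8.8.8. The snippet below is only a minimal sketch of that trimming; maxNameservers and applyNameserverLimit are illustrative names, not kubelet's identifiers, and the fourth address is an assumed example of the entry that got dropped.

package main

import "fmt"

// maxNameservers reflects the classic three-entry resolv.conf limit that
// kubelet enforces when it builds a pod's DNS configuration.
const maxNameservers = 3

func applyNameserverLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	return servers[:maxNameservers] // extra entries are omitted, as the event says
}

func main() {
	// First three values come from the log's "applied nameserver line";
	// the fourth is hypothetical.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.53"}
	fmt.Println(applyNameserverLimit(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}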
Feb 13 15:11:20.826864 containerd[1475]: time="2025-02-13T15:11:20.826779729Z" level=info msg="Ensure that sandbox f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534 in task-service has been cleanup successfully" Feb 13 15:11:20.827384 containerd[1475]: time="2025-02-13T15:11:20.827351799Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" successfully" Feb 13 15:11:20.827384 containerd[1475]: time="2025-02-13T15:11:20.827377240Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:20.828122 containerd[1475]: time="2025-02-13T15:11:20.828094477Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:20.828319 containerd[1475]: time="2025-02-13T15:11:20.828244445Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:20.828319 containerd[1475]: time="2025-02-13T15:11:20.828260566Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:20.828682 kubelet[2624]: I0213 15:11:20.828634 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079" Feb 13 15:11:20.828968 containerd[1475]: time="2025-02-13T15:11:20.828763712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:11:20.829177 containerd[1475]: time="2025-02-13T15:11:20.829154813Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:20.829450 containerd[1475]: time="2025-02-13T15:11:20.829411386Z" level=info msg="Ensure that sandbox b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079 in task-service has been cleanup successfully" Feb 13 15:11:20.830062 systemd[1]: run-netns-cni\x2d0a1e1d84\x2d2539\x2d7426\x2d2a13\x2d1288183b73fa.mount: Deactivated successfully. 
Feb 13 15:11:20.831765 containerd[1475]: time="2025-02-13T15:11:20.831677825Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:20.831765 containerd[1475]: time="2025-02-13T15:11:20.831703306Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:20.832108 containerd[1475]: time="2025-02-13T15:11:20.832076765Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:20.833147 containerd[1475]: time="2025-02-13T15:11:20.832229453Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:20.833147 containerd[1475]: time="2025-02-13T15:11:20.832247814Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:20.833147 containerd[1475]: time="2025-02-13T15:11:20.832959011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:2,}" Feb 13 15:11:20.834778 kubelet[2624]: I0213 15:11:20.834752 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4" Feb 13 15:11:20.836519 containerd[1475]: time="2025-02-13T15:11:20.836482436Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:20.836739 containerd[1475]: time="2025-02-13T15:11:20.836713328Z" level=info msg="Ensure that sandbox e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4 in task-service has been cleanup successfully" Feb 13 15:11:20.836911 containerd[1475]: time="2025-02-13T15:11:20.836894017Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:20.836911 containerd[1475]: time="2025-02-13T15:11:20.836910938Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:20.837782 containerd[1475]: time="2025-02-13T15:11:20.837752742Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:20.837856 containerd[1475]: time="2025-02-13T15:11:20.837842387Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:20.837856 containerd[1475]: time="2025-02-13T15:11:20.837853987Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:20.838930 containerd[1475]: time="2025-02-13T15:11:20.838816357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:11:20.930824 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:34964.service - OpenSSH per-connection server daemon (10.0.0.1:34964). 
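The block above is the second full teardown-and-retry round: for each sandbox that failed, kubelet notes the container is gone, containerd runs StopPodSandbox / TearDown and reports them successful, and RunPodSandbox is reissued with the Attempt counter bumped (the metadata has now reached Attempt:2). The loop below is only a schematic of that pattern with invented names (runPodSandbox, stopPodSandbox, maxAttempts); the real logic lives in kubelet's pod workers and the CRI runtime, which retry with backoff rather than a fixed cap.

package main

import (
	"errors"
	"fmt"
)

// errNoNodename reproduces the failure every attempt hits in the log.
var errNoNodename = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// runPodSandbox stands in for the CRI call that keeps failing above.
func runPodSandbox(name string, attempt int) error {
	fmt.Printf("RunPodSandbox %s Attempt:%d\n", name, attempt)
	return errNoNodename // fails until calico/node writes the nodename file
}

// stopPodSandbox stands in for the StopPodSandbox/TearDown cleanup the log
// reports as "returns successfully" before each retry.
func stopPodSandbox(name string) {
	fmt.Printf("StopPodSandbox %s: teardown successful\n", name)
}

func main() {
	const maxAttempts = 3 // illustrative bound only
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err := runPodSandbox("coredns-76f75df574-72d96", attempt); err != nil {
			stopPodSandbox("coredns-76f75df574-72d96")
			continue // the next sync bumps the attempt counter, as in the log
		}
		break
	}
}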
Feb 13 15:11:21.007827 containerd[1475]: time="2025-02-13T15:11:21.007762094Z" level=error msg="Failed to destroy network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.013922 systemd[1]: run-netns-cni\x2d12bdc402\x2de9cb\x2d0399\x2d86ec\x2d4778b3351dd7.mount: Deactivated successfully. Feb 13 15:11:21.014013 systemd[1]: run-netns-cni\x2dee48fb40\x2d19ae\x2df402\x2dfb10\x2dd1922ad51e04.mount: Deactivated successfully. Feb 13 15:11:21.016864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b-shm.mount: Deactivated successfully. Feb 13 15:11:21.024900 containerd[1475]: time="2025-02-13T15:11:21.024838117Z" level=error msg="encountered an error cleaning up failed sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.025045 containerd[1475]: time="2025-02-13T15:11:21.024940682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.025279 kubelet[2624]: E0213 15:11:21.025255 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.025416 kubelet[2624]: E0213 15:11:21.025312 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:21.025416 kubelet[2624]: E0213 15:11:21.025333 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:21.026473 kubelet[2624]: E0213 15:11:21.025541 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-72d96" podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" Feb 13 15:11:21.032615 containerd[1475]: time="2025-02-13T15:11:21.032544066Z" level=error msg="Failed to destroy network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.034206 containerd[1475]: time="2025-02-13T15:11:21.034061423Z" level=error msg="encountered an error cleaning up failed sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.034206 containerd[1475]: time="2025-02-13T15:11:21.034154508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.034549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757-shm.mount: Deactivated successfully. 
Feb 13 15:11:21.034780 kubelet[2624]: E0213 15:11:21.034536 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.034780 kubelet[2624]: E0213 15:11:21.034729 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:21.034996 kubelet[2624]: E0213 15:11:21.034874 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:21.035145 kubelet[2624]: E0213 15:11:21.034937 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-497kt" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" Feb 13 15:11:21.049295 containerd[1475]: time="2025-02-13T15:11:21.049099303Z" level=error msg="Failed to destroy network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.051566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788-shm.mount: Deactivated successfully. 
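The interleaved systemd lines are the other half of the cleanup: each failed sandbox leaves behind a network-namespace bind mount under /run/netns (the \x2d sequences are systemd's escaping of "-" in the unit names) and a per-sandbox shm mount under containerd's state directory, and both get unmounted, which systemd records as "Deactivated successfully". The sketch below is a rough stand-in for that unmount step; the paths are decoded from the escaped unit names above and should be treated as examples, and in reality the work is done by containerd and systemd, not by code like this.

package main

import (
	"fmt"
	"syscall"
)

// detach lazily unmounts a leftover sandbox mount, roughly what ends up
// reported above as "<unit>.mount: Deactivated successfully".
func detach(path string) error {
	if err := syscall.Unmount(path, syscall.MNT_DETACH); err != nil {
		return fmt.Errorf("unmount %s: %w", path, err)
	}
	return nil
}

func main() {
	// Example paths decoded from the escaped unit names in the log.
	for _, p := range []string{
		"/run/netns/cni-12bdc402-e9cb-0399-86ec-4778b3351dd7",
		"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757/shm",
	} {
		if err := detach(p); err != nil {
			fmt.Println("cleanup:", err)
		}
	}
}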
Feb 13 15:11:21.053046 containerd[1475]: time="2025-02-13T15:11:21.052992620Z" level=error msg="encountered an error cleaning up failed sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.053736 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 34964 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:21.055727 containerd[1475]: time="2025-02-13T15:11:21.053826262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.056538 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:21.058860 containerd[1475]: time="2025-02-13T15:11:21.054290165Z" level=error msg="Failed to destroy network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.060036 kubelet[2624]: E0213 15:11:21.059990 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.060139 kubelet[2624]: E0213 15:11:21.060046 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:21.060139 kubelet[2624]: E0213 15:11:21.060066 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:21.060139 kubelet[2624]: E0213 15:11:21.060121 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:21.061898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487-shm.mount: Deactivated successfully. Feb 13 15:11:21.066682 containerd[1475]: time="2025-02-13T15:11:21.065527213Z" level=error msg="encountered an error cleaning up failed sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.066682 containerd[1475]: time="2025-02-13T15:11:21.065611258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.066839 kubelet[2624]: E0213 15:11:21.065850 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.066839 kubelet[2624]: E0213 15:11:21.065892 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:21.066839 kubelet[2624]: E0213 15:11:21.065919 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:21.066928 kubelet[2624]: E0213 15:11:21.065969 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" Feb 13 15:11:21.067966 systemd-logind[1454]: New session 9 of user core. Feb 13 15:11:21.073266 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:11:21.095373 containerd[1475]: time="2025-02-13T15:11:21.094862976Z" level=error msg="Failed to destroy network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.095373 containerd[1475]: time="2025-02-13T15:11:21.095253276Z" level=error msg="encountered an error cleaning up failed sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.095373 containerd[1475]: time="2025-02-13T15:11:21.095308639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.095932 kubelet[2624]: E0213 15:11:21.095772 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.095932 kubelet[2624]: E0213 15:11:21.095829 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:21.095932 kubelet[2624]: E0213 15:11:21.095849 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:21.096325 kubelet[2624]: E0213 15:11:21.095908 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" Feb 13 15:11:21.113278 containerd[1475]: time="2025-02-13T15:11:21.113222344Z" level=error msg="Failed to destroy network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.113640 containerd[1475]: time="2025-02-13T15:11:21.113600963Z" level=error msg="encountered an error cleaning up failed sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.113705 containerd[1475]: time="2025-02-13T15:11:21.113683648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.114231 kubelet[2624]: E0213 15:11:21.113906 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.114231 kubelet[2624]: E0213 15:11:21.113964 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:21.114231 kubelet[2624]: E0213 15:11:21.113984 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 
15:11:21.114350 kubelet[2624]: E0213 15:11:21.114039 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podUID="669d688c-25ab-473d-9d28-45c8a124548b" Feb 13 15:11:21.194615 sshd[4070]: Connection closed by 10.0.0.1 port 34964 Feb 13 15:11:21.195360 sshd-session[3927]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:21.199665 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:34964.service: Deactivated successfully. Feb 13 15:11:21.201580 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:11:21.202739 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:11:21.204851 systemd-logind[1454]: Removed session 9. Feb 13 15:11:21.840582 kubelet[2624]: I0213 15:11:21.840548 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b" Feb 13 15:11:21.841488 containerd[1475]: time="2025-02-13T15:11:21.841360990Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:21.841797 containerd[1475]: time="2025-02-13T15:11:21.841543639Z" level=info msg="Ensure that sandbox 08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b in task-service has been cleanup successfully" Feb 13 15:11:21.841826 containerd[1475]: time="2025-02-13T15:11:21.841764850Z" level=info msg="TearDown network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" successfully" Feb 13 15:11:21.842551 containerd[1475]: time="2025-02-13T15:11:21.842519448Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" returns successfully" Feb 13 15:11:21.843060 containerd[1475]: time="2025-02-13T15:11:21.842882947Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:21.843060 containerd[1475]: time="2025-02-13T15:11:21.842976471Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:21.843060 containerd[1475]: time="2025-02-13T15:11:21.842988472Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:21.843482 containerd[1475]: time="2025-02-13T15:11:21.843349650Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:21.843482 containerd[1475]: time="2025-02-13T15:11:21.843430694Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:21.843482 containerd[1475]: time="2025-02-13T15:11:21.843442535Z" level=info msg="StopPodSandbox for 
\"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:21.843751 kubelet[2624]: I0213 15:11:21.843707 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487" Feb 13 15:11:21.844261 kubelet[2624]: E0213 15:11:21.843866 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:21.844334 containerd[1475]: time="2025-02-13T15:11:21.844189693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:3,}" Feb 13 15:11:21.844334 containerd[1475]: time="2025-02-13T15:11:21.844269537Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:21.844433 containerd[1475]: time="2025-02-13T15:11:21.844410144Z" level=info msg="Ensure that sandbox b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487 in task-service has been cleanup successfully" Feb 13 15:11:21.845238 containerd[1475]: time="2025-02-13T15:11:21.845202064Z" level=info msg="TearDown network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" successfully" Feb 13 15:11:21.845238 containerd[1475]: time="2025-02-13T15:11:21.845229905Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" returns successfully" Feb 13 15:11:21.845585 containerd[1475]: time="2025-02-13T15:11:21.845558802Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:21.845696 containerd[1475]: time="2025-02-13T15:11:21.845671808Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:21.845696 containerd[1475]: time="2025-02-13T15:11:21.845687609Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:21.846417 containerd[1475]: time="2025-02-13T15:11:21.846261478Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:21.846417 containerd[1475]: time="2025-02-13T15:11:21.846353482Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:21.846417 containerd[1475]: time="2025-02-13T15:11:21.846364163Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:21.847046 containerd[1475]: time="2025-02-13T15:11:21.846974274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:3,}" Feb 13 15:11:21.847559 kubelet[2624]: I0213 15:11:21.847445 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b" Feb 13 15:11:21.848134 containerd[1475]: time="2025-02-13T15:11:21.848092610Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:21.848281 containerd[1475]: time="2025-02-13T15:11:21.848260899Z" level=info msg="Ensure 
that sandbox 41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b in task-service has been cleanup successfully" Feb 13 15:11:21.848810 containerd[1475]: time="2025-02-13T15:11:21.848774525Z" level=info msg="TearDown network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" successfully" Feb 13 15:11:21.848810 containerd[1475]: time="2025-02-13T15:11:21.848797086Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" returns successfully" Feb 13 15:11:21.849230 containerd[1475]: time="2025-02-13T15:11:21.849186625Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:21.849427 containerd[1475]: time="2025-02-13T15:11:21.849271430Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" successfully" Feb 13 15:11:21.849427 containerd[1475]: time="2025-02-13T15:11:21.849287310Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:21.849859 containerd[1475]: time="2025-02-13T15:11:21.849817577Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:21.850091 kubelet[2624]: I0213 15:11:21.850009 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788" Feb 13 15:11:21.850323 containerd[1475]: time="2025-02-13T15:11:21.850201157Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:21.850323 containerd[1475]: time="2025-02-13T15:11:21.850218998Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:21.851218 containerd[1475]: time="2025-02-13T15:11:21.851185286Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:21.851477 containerd[1475]: time="2025-02-13T15:11:21.851456700Z" level=info msg="Ensure that sandbox 24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788 in task-service has been cleanup successfully" Feb 13 15:11:21.851746 containerd[1475]: time="2025-02-13T15:11:21.851718793Z" level=info msg="TearDown network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" successfully" Feb 13 15:11:21.851746 containerd[1475]: time="2025-02-13T15:11:21.851743675Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" returns successfully" Feb 13 15:11:21.864322 containerd[1475]: time="2025-02-13T15:11:21.864271148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:11:21.864953 containerd[1475]: time="2025-02-13T15:11:21.864847057Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:21.865188 containerd[1475]: time="2025-02-13T15:11:21.865148632Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:21.865300 containerd[1475]: time="2025-02-13T15:11:21.865279199Z" level=info msg="StopPodSandbox for 
\"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:21.865799 containerd[1475]: time="2025-02-13T15:11:21.865776384Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:21.866079 containerd[1475]: time="2025-02-13T15:11:21.865894310Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:21.866079 containerd[1475]: time="2025-02-13T15:11:21.865931152Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:21.866919 containerd[1475]: time="2025-02-13T15:11:21.866885240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:3,}" Feb 13 15:11:21.867448 kubelet[2624]: I0213 15:11:21.867265 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2" Feb 13 15:11:21.869754 containerd[1475]: time="2025-02-13T15:11:21.869720543Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:21.869949 containerd[1475]: time="2025-02-13T15:11:21.869925234Z" level=info msg="Ensure that sandbox 1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2 in task-service has been cleanup successfully" Feb 13 15:11:21.870942 kubelet[2624]: I0213 15:11:21.870918 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757" Feb 13 15:11:21.871490 containerd[1475]: time="2025-02-13T15:11:21.871393548Z" level=info msg="TearDown network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" successfully" Feb 13 15:11:21.871490 containerd[1475]: time="2025-02-13T15:11:21.871456351Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" returns successfully" Feb 13 15:11:21.871613 containerd[1475]: time="2025-02-13T15:11:21.871420749Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:21.871917 containerd[1475]: time="2025-02-13T15:11:21.871814289Z" level=info msg="Ensure that sandbox 222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757 in task-service has been cleanup successfully" Feb 13 15:11:21.872105 containerd[1475]: time="2025-02-13T15:11:21.871998898Z" level=info msg="TearDown network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" successfully" Feb 13 15:11:21.872105 containerd[1475]: time="2025-02-13T15:11:21.872014059Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" returns successfully" Feb 13 15:11:21.872105 containerd[1475]: time="2025-02-13T15:11:21.872001939Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:21.872491 containerd[1475]: time="2025-02-13T15:11:21.872167427Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:21.872491 containerd[1475]: time="2025-02-13T15:11:21.872178028Z" level=info msg="StopPodSandbox for 
\"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:21.872850 containerd[1475]: time="2025-02-13T15:11:21.872721415Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:21.873008 containerd[1475]: time="2025-02-13T15:11:21.872764097Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:21.873008 containerd[1475]: time="2025-02-13T15:11:21.872875423Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:21.873008 containerd[1475]: time="2025-02-13T15:11:21.872904664Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:21.873008 containerd[1475]: time="2025-02-13T15:11:21.872998429Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:21.873975 containerd[1475]: time="2025-02-13T15:11:21.873010950Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:21.873975 containerd[1475]: time="2025-02-13T15:11:21.873460612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:11:21.874244 containerd[1475]: time="2025-02-13T15:11:21.874159248Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:21.874493 containerd[1475]: time="2025-02-13T15:11:21.874416381Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:21.874493 containerd[1475]: time="2025-02-13T15:11:21.874486464Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:21.875281 kubelet[2624]: E0213 15:11:21.874825 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:21.875344 containerd[1475]: time="2025-02-13T15:11:21.875116576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:3,}" Feb 13 15:11:21.974496 containerd[1475]: time="2025-02-13T15:11:21.974321391Z" level=error msg="Failed to destroy network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.974858 containerd[1475]: time="2025-02-13T15:11:21.974824456Z" level=error msg="encountered an error cleaning up failed sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.974914 containerd[1475]: time="2025-02-13T15:11:21.974889459Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.975205 kubelet[2624]: E0213 15:11:21.975159 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.975280 kubelet[2624]: E0213 15:11:21.975216 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:21.975280 kubelet[2624]: E0213 15:11:21.975238 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:21.975343 kubelet[2624]: E0213 15:11:21.975293 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" Feb 13 15:11:21.985128 containerd[1475]: time="2025-02-13T15:11:21.984997850Z" level=error msg="Failed to destroy network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.985796 containerd[1475]: time="2025-02-13T15:11:21.985478195Z" level=error msg="encountered an error cleaning up failed sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.985796 containerd[1475]: time="2025-02-13T15:11:21.985546918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.985888 kubelet[2624]: E0213 15:11:21.985805 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:21.985888 kubelet[2624]: E0213 15:11:21.985858 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:21.985888 kubelet[2624]: E0213 15:11:21.985878 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:21.985984 kubelet[2624]: E0213 15:11:21.985932 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-72d96" podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" Feb 13 15:11:22.012975 systemd[1]: run-netns-cni\x2db4ebf04e\x2dba3a\x2dfdca\x2dbf1d\x2d8be21982f6cc.mount: Deactivated successfully. Feb 13 15:11:22.013073 systemd[1]: run-netns-cni\x2d809ad95f\x2d78e6\x2d016e\x2df985\x2da1da7fd0d2d5.mount: Deactivated successfully. Feb 13 15:11:22.013124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2-shm.mount: Deactivated successfully. 
Feb 13 15:11:22.013350 containerd[1475]: time="2025-02-13T15:11:22.013166976Z" level=error msg="Failed to destroy network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.013176 systemd[1]: run-netns-cni\x2d2b58ec38\x2d205f\x2d972e\x2d8043\x2d4d16e4f4943c.mount: Deactivated successfully. Feb 13 15:11:22.013221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b-shm.mount: Deactivated successfully. Feb 13 15:11:22.013290 systemd[1]: run-netns-cni\x2d73c73947\x2d43b4\x2d8edc\x2dfda7\x2d5a101b042da5.mount: Deactivated successfully. Feb 13 15:11:22.013336 systemd[1]: run-netns-cni\x2dd8eb8071\x2d673f\x2da7e6\x2d4cfa\x2d132e2876fcfa.mount: Deactivated successfully. Feb 13 15:11:22.013380 systemd[1]: run-netns-cni\x2d302149c1\x2dac67\x2d1f5c\x2dbcb8\x2df0fc05f2ea90.mount: Deactivated successfully. Feb 13 15:11:22.013568 containerd[1475]: time="2025-02-13T15:11:22.013527273Z" level=error msg="encountered an error cleaning up failed sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.013722 containerd[1475]: time="2025-02-13T15:11:22.013589477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.015828 kubelet[2624]: E0213 15:11:22.015800 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.015906 kubelet[2624]: E0213 15:11:22.015854 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:22.015906 kubelet[2624]: E0213 15:11:22.015877 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:22.015960 kubelet[2624]: E0213 15:11:22.015930 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podUID="669d688c-25ab-473d-9d28-45c8a124548b" Feb 13 15:11:22.017342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b-shm.mount: Deactivated successfully. Feb 13 15:11:22.024978 containerd[1475]: time="2025-02-13T15:11:22.024764864Z" level=error msg="Failed to destroy network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.025351 containerd[1475]: time="2025-02-13T15:11:22.025325491Z" level=error msg="encountered an error cleaning up failed sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.025580 containerd[1475]: time="2025-02-13T15:11:22.025467418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.025837 kubelet[2624]: E0213 15:11:22.025814 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.026132 kubelet[2624]: E0213 15:11:22.026017 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:22.026132 kubelet[2624]: E0213 15:11:22.026055 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:22.026132 kubelet[2624]: E0213 15:11:22.026111 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:22.027115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f-shm.mount: Deactivated successfully. Feb 13 15:11:22.035447 containerd[1475]: time="2025-02-13T15:11:22.035409225Z" level=error msg="Failed to destroy network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.035871 containerd[1475]: time="2025-02-13T15:11:22.035846806Z" level=error msg="encountered an error cleaning up failed sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.036195 containerd[1475]: time="2025-02-13T15:11:22.036168822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.036600 kubelet[2624]: E0213 15:11:22.036582 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.036739 kubelet[2624]: E0213 15:11:22.036728 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:22.036851 kubelet[2624]: E0213 15:11:22.036840 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:22.037164 kubelet[2624]: E0213 15:11:22.037147 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" Feb 13 15:11:22.037455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce-shm.mount: Deactivated successfully. Feb 13 15:11:22.047803 containerd[1475]: time="2025-02-13T15:11:22.047680346Z" level=error msg="Failed to destroy network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.048500 containerd[1475]: time="2025-02-13T15:11:22.048203251Z" level=error msg="encountered an error cleaning up failed sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.048500 containerd[1475]: time="2025-02-13T15:11:22.048269574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.048749 kubelet[2624]: E0213 15:11:22.048729 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:22.048881 kubelet[2624]: E0213 15:11:22.048868 2624 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:22.049026 kubelet[2624]: E0213 15:11:22.048936 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:22.049229 kubelet[2624]: E0213 15:11:22.049160 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-497kt" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" Feb 13 15:11:22.050457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408-shm.mount: Deactivated successfully. 
Feb 13 15:11:22.875538 kubelet[2624]: I0213 15:11:22.875494 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f" Feb 13 15:11:22.876721 containerd[1475]: time="2025-02-13T15:11:22.876042139Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" Feb 13 15:11:22.876721 containerd[1475]: time="2025-02-13T15:11:22.876221148Z" level=info msg="Ensure that sandbox 24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f in task-service has been cleanup successfully" Feb 13 15:11:22.877545 containerd[1475]: time="2025-02-13T15:11:22.877080830Z" level=info msg="TearDown network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" successfully" Feb 13 15:11:22.877545 containerd[1475]: time="2025-02-13T15:11:22.877110792Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" returns successfully" Feb 13 15:11:22.877545 containerd[1475]: time="2025-02-13T15:11:22.877401206Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:22.877545 containerd[1475]: time="2025-02-13T15:11:22.877488170Z" level=info msg="TearDown network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" successfully" Feb 13 15:11:22.877545 containerd[1475]: time="2025-02-13T15:11:22.877498651Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" returns successfully" Feb 13 15:11:22.878536 containerd[1475]: time="2025-02-13T15:11:22.878256768Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:22.878536 containerd[1475]: time="2025-02-13T15:11:22.878340172Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:22.878536 containerd[1475]: time="2025-02-13T15:11:22.878350572Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:22.878750 containerd[1475]: time="2025-02-13T15:11:22.878720710Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:22.878814 containerd[1475]: time="2025-02-13T15:11:22.878796874Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:22.878979 containerd[1475]: time="2025-02-13T15:11:22.878812155Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:22.879180 kubelet[2624]: I0213 15:11:22.879145 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce" Feb 13 15:11:22.879713 containerd[1475]: time="2025-02-13T15:11:22.879687998Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" Feb 13 15:11:22.880172 containerd[1475]: time="2025-02-13T15:11:22.880008894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:4,}" Feb 13 15:11:22.880232 containerd[1475]: time="2025-02-13T15:11:22.880093378Z" level=info 
msg="Ensure that sandbox 9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce in task-service has been cleanup successfully" Feb 13 15:11:22.880530 containerd[1475]: time="2025-02-13T15:11:22.880506078Z" level=info msg="TearDown network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" successfully" Feb 13 15:11:22.880587 containerd[1475]: time="2025-02-13T15:11:22.880529839Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" returns successfully" Feb 13 15:11:22.881551 containerd[1475]: time="2025-02-13T15:11:22.881328118Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:22.881551 containerd[1475]: time="2025-02-13T15:11:22.881404362Z" level=info msg="TearDown network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" successfully" Feb 13 15:11:22.881551 containerd[1475]: time="2025-02-13T15:11:22.881414482Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" returns successfully" Feb 13 15:11:22.881921 containerd[1475]: time="2025-02-13T15:11:22.881894466Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:22.882063 containerd[1475]: time="2025-02-13T15:11:22.881981510Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:22.882063 containerd[1475]: time="2025-02-13T15:11:22.881996831Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:22.882490 containerd[1475]: time="2025-02-13T15:11:22.882468734Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:22.883292 containerd[1475]: time="2025-02-13T15:11:22.883208050Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:22.883292 containerd[1475]: time="2025-02-13T15:11:22.883225451Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:22.883292 containerd[1475]: time="2025-02-13T15:11:22.883240092Z" level=info msg="StopPodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" Feb 13 15:11:22.883423 kubelet[2624]: I0213 15:11:22.882755 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408" Feb 13 15:11:22.883459 containerd[1475]: time="2025-02-13T15:11:22.883411620Z" level=info msg="Ensure that sandbox 02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408 in task-service has been cleanup successfully" Feb 13 15:11:22.883922 containerd[1475]: time="2025-02-13T15:11:22.883898484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:11:22.884212 containerd[1475]: time="2025-02-13T15:11:22.883951567Z" level=info msg="TearDown network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" successfully" Feb 13 15:11:22.884423 containerd[1475]: time="2025-02-13T15:11:22.884207579Z" level=info msg="StopPodSandbox for 
\"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" returns successfully" Feb 13 15:11:22.884851 containerd[1475]: time="2025-02-13T15:11:22.884829890Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:22.885123 containerd[1475]: time="2025-02-13T15:11:22.885099583Z" level=info msg="TearDown network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" successfully" Feb 13 15:11:22.885176 containerd[1475]: time="2025-02-13T15:11:22.885121424Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" returns successfully" Feb 13 15:11:22.885527 containerd[1475]: time="2025-02-13T15:11:22.885498282Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:22.885711 containerd[1475]: time="2025-02-13T15:11:22.885691972Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:22.885789 containerd[1475]: time="2025-02-13T15:11:22.885775736Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:22.887844 containerd[1475]: time="2025-02-13T15:11:22.887821076Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:22.888394 containerd[1475]: time="2025-02-13T15:11:22.888184694Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:22.888394 containerd[1475]: time="2025-02-13T15:11:22.888202255Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:22.888838 kubelet[2624]: E0213 15:11:22.888810 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:22.889101 kubelet[2624]: I0213 15:11:22.889076 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb" Feb 13 15:11:22.889336 containerd[1475]: time="2025-02-13T15:11:22.889314509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:4,}" Feb 13 15:11:22.889828 containerd[1475]: time="2025-02-13T15:11:22.889797693Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" Feb 13 15:11:22.890340 containerd[1475]: time="2025-02-13T15:11:22.890308158Z" level=info msg="Ensure that sandbox a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb in task-service has been cleanup successfully" Feb 13 15:11:22.891770 kubelet[2624]: I0213 15:11:22.891745 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b" Feb 13 15:11:22.891853 containerd[1475]: time="2025-02-13T15:11:22.890729738Z" level=info msg="TearDown network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" successfully" Feb 13 15:11:22.891853 containerd[1475]: time="2025-02-13T15:11:22.891770349Z" level=info msg="StopPodSandbox for 
\"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" returns successfully" Feb 13 15:11:22.892678 containerd[1475]: time="2025-02-13T15:11:22.892386619Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" Feb 13 15:11:22.892678 containerd[1475]: time="2025-02-13T15:11:22.892542787Z" level=info msg="Ensure that sandbox d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b in task-service has been cleanup successfully" Feb 13 15:11:22.892945 containerd[1475]: time="2025-02-13T15:11:22.892925366Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:22.893126 containerd[1475]: time="2025-02-13T15:11:22.893064333Z" level=info msg="TearDown network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" successfully" Feb 13 15:11:22.893285 containerd[1475]: time="2025-02-13T15:11:22.893181178Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" returns successfully" Feb 13 15:11:22.893624 containerd[1475]: time="2025-02-13T15:11:22.893459952Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:22.893624 containerd[1475]: time="2025-02-13T15:11:22.893531316Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:22.893624 containerd[1475]: time="2025-02-13T15:11:22.893540076Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:22.893802 containerd[1475]: time="2025-02-13T15:11:22.893783448Z" level=info msg="TearDown network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" successfully" Feb 13 15:11:22.893857 containerd[1475]: time="2025-02-13T15:11:22.893845211Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" returns successfully" Feb 13 15:11:22.894285 containerd[1475]: time="2025-02-13T15:11:22.894264471Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:22.894706 containerd[1475]: time="2025-02-13T15:11:22.894264831Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:22.894706 containerd[1475]: time="2025-02-13T15:11:22.894615209Z" level=info msg="TearDown network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" successfully" Feb 13 15:11:22.894706 containerd[1475]: time="2025-02-13T15:11:22.894623529Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" returns successfully" Feb 13 15:11:22.896388 containerd[1475]: time="2025-02-13T15:11:22.895248960Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:22.896388 containerd[1475]: time="2025-02-13T15:11:22.895476171Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" successfully" Feb 13 15:11:22.896388 containerd[1475]: time="2025-02-13T15:11:22.895492812Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:22.896388 containerd[1475]: 
time="2025-02-13T15:11:22.896134563Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" Feb 13 15:11:22.896388 containerd[1475]: time="2025-02-13T15:11:22.896270130Z" level=info msg="Ensure that sandbox 0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d in task-service has been cleanup successfully" Feb 13 15:11:22.896534 kubelet[2624]: I0213 15:11:22.895308 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d" Feb 13 15:11:22.896610 containerd[1475]: time="2025-02-13T15:11:22.896590585Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:22.896706 containerd[1475]: time="2025-02-13T15:11:22.896691590Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:22.896930 kubelet[2624]: E0213 15:11:22.896914 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:22.897076 containerd[1475]: time="2025-02-13T15:11:22.897048688Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:22.897488 containerd[1475]: time="2025-02-13T15:11:22.897469868Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:22.897577 containerd[1475]: time="2025-02-13T15:11:22.897563553Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:22.897663 containerd[1475]: time="2025-02-13T15:11:22.897217096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:4,}" Feb 13 15:11:22.898421 containerd[1475]: time="2025-02-13T15:11:22.898391273Z" level=info msg="TearDown network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" successfully" Feb 13 15:11:22.898505 containerd[1475]: time="2025-02-13T15:11:22.898491878Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" returns successfully" Feb 13 15:11:22.899236 containerd[1475]: time="2025-02-13T15:11:22.898978462Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:22.899236 containerd[1475]: time="2025-02-13T15:11:22.899050586Z" level=info msg="TearDown network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" successfully" Feb 13 15:11:22.899236 containerd[1475]: time="2025-02-13T15:11:22.899068027Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" returns successfully" Feb 13 15:11:22.899842 containerd[1475]: time="2025-02-13T15:11:22.899588132Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:22.900179 containerd[1475]: time="2025-02-13T15:11:22.900156760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:11:22.900669 containerd[1475]: 
time="2025-02-13T15:11:22.900464415Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:22.900669 containerd[1475]: time="2025-02-13T15:11:22.900482576Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:22.901089 containerd[1475]: time="2025-02-13T15:11:22.900971560Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:22.901089 containerd[1475]: time="2025-02-13T15:11:22.901040923Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:22.901089 containerd[1475]: time="2025-02-13T15:11:22.901049924Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:22.903020 containerd[1475]: time="2025-02-13T15:11:22.902880053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:4,}" Feb 13 15:11:23.008730 containerd[1475]: time="2025-02-13T15:11:23.008638580Z" level=error msg="Failed to destroy network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.009166 containerd[1475]: time="2025-02-13T15:11:23.009135964Z" level=error msg="encountered an error cleaning up failed sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.009295 containerd[1475]: time="2025-02-13T15:11:23.009274211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.009697 kubelet[2624]: E0213 15:11:23.009613 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.009697 kubelet[2624]: E0213 15:11:23.009695 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 
15:11:23.009794 kubelet[2624]: E0213 15:11:23.009717 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:23.009794 kubelet[2624]: E0213 15:11:23.009771 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:23.014472 systemd[1]: run-netns-cni\x2d96d80325\x2d90b6\x2d6049\x2d9ac3\x2d8e13fa6a95bb.mount: Deactivated successfully. Feb 13 15:11:23.014562 systemd[1]: run-netns-cni\x2d72026d8e\x2dc818\x2d157d\x2d727e\x2dfff007d94e36.mount: Deactivated successfully. Feb 13 15:11:23.014608 systemd[1]: run-netns-cni\x2d89f799b1\x2d5db5\x2df209\x2d4586\x2d1073e8854988.mount: Deactivated successfully. Feb 13 15:11:23.014698 systemd[1]: run-netns-cni\x2dadae6eba\x2dddae\x2db41d\x2d2888\x2d8e642da8a6e1.mount: Deactivated successfully. Feb 13 15:11:23.014747 systemd[1]: run-netns-cni\x2df442fe82\x2d7a56\x2d09a3\x2d9332\x2d469b2db8bd25.mount: Deactivated successfully. Feb 13 15:11:23.014789 systemd[1]: run-netns-cni\x2d80a256e7\x2dd090\x2d1842\x2dd5a8\x2da6cca148b474.mount: Deactivated successfully. Feb 13 15:11:23.019163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e-shm.mount: Deactivated successfully. 
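The repeated "stat /var/lib/calico/nodename: no such file or directory" failures above are the Calico CNI plugin refusing to set up or tear down pod networking before calico/node has written its nodename file under the /var/lib/calico host mount. The Go sketch below illustrates that kind of pre-flight guard only; loadNodename and its error wrapping are assumptions for illustration, not Calico's actual source.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path named in the log's error text; calico/node is
// expected to create it once it is running with /var/lib/calico mounted.
const nodenameFile = "/var/lib/calico/nodename"

// loadNodename (illustrative name) sketches the guard implied by the log:
// if the file cannot be stat'ed, the CNI add/delete call fails up front
// with the same hint the kubelet keeps surfacing above.
func loadNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Until that file exists on this host, every RunPodSandbox attempt fails the same way, which is exactly the pattern in the attempt-4 and attempt-5 entries that follow.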
Feb 13 15:11:23.035037 containerd[1475]: time="2025-02-13T15:11:23.034971230Z" level=error msg="Failed to destroy network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.035545 containerd[1475]: time="2025-02-13T15:11:23.035389330Z" level=error msg="encountered an error cleaning up failed sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.035545 containerd[1475]: time="2025-02-13T15:11:23.035461254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.036048 kubelet[2624]: E0213 15:11:23.036026 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.036098 kubelet[2624]: E0213 15:11:23.036078 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:23.036098 kubelet[2624]: E0213 15:11:23.036097 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:23.036154 kubelet[2624]: E0213 15:11:23.036145 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" Feb 13 15:11:23.037676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485-shm.mount: Deactivated successfully. Feb 13 15:11:23.046771 containerd[1475]: time="2025-02-13T15:11:23.046394252Z" level=error msg="Failed to destroy network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.046896 containerd[1475]: time="2025-02-13T15:11:23.046861995Z" level=error msg="encountered an error cleaning up failed sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.046965 containerd[1475]: time="2025-02-13T15:11:23.046939718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.047194 kubelet[2624]: E0213 15:11:23.047174 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.047240 kubelet[2624]: E0213 15:11:23.047230 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:23.047263 kubelet[2624]: E0213 15:11:23.047254 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:23.047315 kubelet[2624]: E0213 15:11:23.047304 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-72d96" podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" Feb 13 15:11:23.049375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2-shm.mount: Deactivated successfully. Feb 13 15:11:23.050319 containerd[1475]: time="2025-02-13T15:11:23.050280437Z" level=error msg="Failed to destroy network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.050720 containerd[1475]: time="2025-02-13T15:11:23.050665855Z" level=error msg="encountered an error cleaning up failed sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.050790 containerd[1475]: time="2025-02-13T15:11:23.050753939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.051390 kubelet[2624]: E0213 15:11:23.051227 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.051390 kubelet[2624]: E0213 15:11:23.051274 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:23.051390 kubelet[2624]: E0213 15:11:23.051304 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:23.051489 kubelet[2624]: E0213 15:11:23.051364 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-497kt" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" Feb 13 15:11:23.053556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb-shm.mount: Deactivated successfully. Feb 13 15:11:23.177756 containerd[1475]: time="2025-02-13T15:11:23.177571399Z" level=error msg="Failed to destroy network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.178804 containerd[1475]: time="2025-02-13T15:11:23.178044181Z" level=error msg="encountered an error cleaning up failed sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.178804 containerd[1475]: time="2025-02-13T15:11:23.178214949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.178944 kubelet[2624]: E0213 15:11:23.178855 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.178944 kubelet[2624]: E0213 15:11:23.178910 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:23.178944 kubelet[2624]: E0213 15:11:23.178929 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:23.179025 kubelet[2624]: E0213 15:11:23.178978 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podUID="669d688c-25ab-473d-9d28-45c8a124548b" Feb 13 15:11:23.187715 kubelet[2624]: I0213 15:11:23.187679 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:23.188296 kubelet[2624]: E0213 15:11:23.188277 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:23.202187 containerd[1475]: time="2025-02-13T15:11:23.202138485Z" level=error msg="Failed to destroy network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.203001 containerd[1475]: time="2025-02-13T15:11:23.202970284Z" level=error msg="encountered an error cleaning up failed sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.204263 containerd[1475]: time="2025-02-13T15:11:23.204120379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.205096 kubelet[2624]: E0213 15:11:23.205072 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:23.205252 kubelet[2624]: E0213 15:11:23.205239 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:23.205339 kubelet[2624]: E0213 15:11:23.205330 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:23.205452 kubelet[2624]: E0213 15:11:23.205441 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" Feb 13 15:11:23.615739 containerd[1475]: time="2025-02-13T15:11:23.615682794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:11:23.619812 containerd[1475]: time="2025-02-13T15:11:23.619095996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.829018513s" Feb 13 15:11:23.619812 containerd[1475]: time="2025-02-13T15:11:23.619131358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:11:23.625843 containerd[1475]: time="2025-02-13T15:11:23.625800194Z" level=info msg="CreateContainer within sandbox \"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:11:23.656504 containerd[1475]: time="2025-02-13T15:11:23.656454489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:23.657172 containerd[1475]: time="2025-02-13T15:11:23.657090239Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:23.657765 containerd[1475]: time="2025-02-13T15:11:23.657594623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:23.673924 containerd[1475]: time="2025-02-13T15:11:23.673885997Z" level=info msg="CreateContainer within sandbox 
\"5569c3b8172481097548c1ae5ab14b5a085f89b131a73b673475a9b552df224a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5346e840f06adc17ffd11b12a1238bd910ebc077714cd97d3d8c396a1992db02\"" Feb 13 15:11:23.674744 containerd[1475]: time="2025-02-13T15:11:23.674718556Z" level=info msg="StartContainer for \"5346e840f06adc17ffd11b12a1238bd910ebc077714cd97d3d8c396a1992db02\"" Feb 13 15:11:23.727831 systemd[1]: Started cri-containerd-5346e840f06adc17ffd11b12a1238bd910ebc077714cd97d3d8c396a1992db02.scope - libcontainer container 5346e840f06adc17ffd11b12a1238bd910ebc077714cd97d3d8c396a1992db02. Feb 13 15:11:23.753160 containerd[1475]: time="2025-02-13T15:11:23.753112997Z" level=info msg="StartContainer for \"5346e840f06adc17ffd11b12a1238bd910ebc077714cd97d3d8c396a1992db02\" returns successfully" Feb 13 15:11:23.901183 kubelet[2624]: I0213 15:11:23.901068 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8" Feb 13 15:11:23.903012 containerd[1475]: time="2025-02-13T15:11:23.902938989Z" level=info msg="StopPodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\"" Feb 13 15:11:23.903361 containerd[1475]: time="2025-02-13T15:11:23.903143878Z" level=info msg="Ensure that sandbox 71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8 in task-service has been cleanup successfully" Feb 13 15:11:23.903361 containerd[1475]: time="2025-02-13T15:11:23.903300006Z" level=info msg="TearDown network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" successfully" Feb 13 15:11:23.903361 containerd[1475]: time="2025-02-13T15:11:23.903314166Z" level=info msg="StopPodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" returns successfully" Feb 13 15:11:23.904218 containerd[1475]: time="2025-02-13T15:11:23.904191728Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" Feb 13 15:11:23.904319 containerd[1475]: time="2025-02-13T15:11:23.904283292Z" level=info msg="TearDown network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" successfully" Feb 13 15:11:23.904319 containerd[1475]: time="2025-02-13T15:11:23.904298173Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" returns successfully" Feb 13 15:11:23.904608 containerd[1475]: time="2025-02-13T15:11:23.904587987Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:23.904760 containerd[1475]: time="2025-02-13T15:11:23.904685752Z" level=info msg="TearDown network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" successfully" Feb 13 15:11:23.904760 containerd[1475]: time="2025-02-13T15:11:23.904709313Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" returns successfully" Feb 13 15:11:23.904964 containerd[1475]: time="2025-02-13T15:11:23.904941684Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:23.905093 containerd[1475]: time="2025-02-13T15:11:23.905048169Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:23.905093 containerd[1475]: time="2025-02-13T15:11:23.905064690Z" level=info msg="StopPodSandbox 
for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:23.905484 containerd[1475]: time="2025-02-13T15:11:23.905397385Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:23.905730 containerd[1475]: time="2025-02-13T15:11:23.905544672Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:23.905730 containerd[1475]: time="2025-02-13T15:11:23.905560873Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:23.906251 containerd[1475]: time="2025-02-13T15:11:23.906206864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:5,}" Feb 13 15:11:23.906798 kubelet[2624]: I0213 15:11:23.906763 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e" Feb 13 15:11:23.908178 containerd[1475]: time="2025-02-13T15:11:23.907492605Z" level=info msg="StopPodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\"" Feb 13 15:11:23.908178 containerd[1475]: time="2025-02-13T15:11:23.907770778Z" level=info msg="Ensure that sandbox d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e in task-service has been cleanup successfully" Feb 13 15:11:23.908178 containerd[1475]: time="2025-02-13T15:11:23.908075712Z" level=info msg="TearDown network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" successfully" Feb 13 15:11:23.908178 containerd[1475]: time="2025-02-13T15:11:23.908092993Z" level=info msg="StopPodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" returns successfully" Feb 13 15:11:23.908414 containerd[1475]: time="2025-02-13T15:11:23.908368366Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" Feb 13 15:11:23.908452 containerd[1475]: time="2025-02-13T15:11:23.908439690Z" level=info msg="TearDown network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" successfully" Feb 13 15:11:23.908452 containerd[1475]: time="2025-02-13T15:11:23.908449090Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" returns successfully" Feb 13 15:11:23.908848 containerd[1475]: time="2025-02-13T15:11:23.908824148Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:23.909005 containerd[1475]: time="2025-02-13T15:11:23.908964835Z" level=info msg="TearDown network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" successfully" Feb 13 15:11:23.909005 containerd[1475]: time="2025-02-13T15:11:23.908982075Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" returns successfully" Feb 13 15:11:23.909476 containerd[1475]: time="2025-02-13T15:11:23.909455058Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:23.909544 containerd[1475]: time="2025-02-13T15:11:23.909530141Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" 
successfully" Feb 13 15:11:23.909572 containerd[1475]: time="2025-02-13T15:11:23.909544222Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:23.910576 containerd[1475]: time="2025-02-13T15:11:23.910550350Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:23.911081 containerd[1475]: time="2025-02-13T15:11:23.911058654Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:23.911081 containerd[1475]: time="2025-02-13T15:11:23.911080575Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:23.912356 kubelet[2624]: I0213 15:11:23.912323 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e" Feb 13 15:11:23.912853 containerd[1475]: time="2025-02-13T15:11:23.912829218Z" level=info msg="StopPodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\"" Feb 13 15:11:23.913144 containerd[1475]: time="2025-02-13T15:11:23.912972145Z" level=info msg="Ensure that sandbox ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e in task-service has been cleanup successfully" Feb 13 15:11:23.913889 containerd[1475]: time="2025-02-13T15:11:23.913267159Z" level=info msg="TearDown network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" successfully" Feb 13 15:11:23.913889 containerd[1475]: time="2025-02-13T15:11:23.913285360Z" level=info msg="StopPodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" returns successfully" Feb 13 15:11:23.915028 containerd[1475]: time="2025-02-13T15:11:23.914320529Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" Feb 13 15:11:23.915028 containerd[1475]: time="2025-02-13T15:11:23.914485537Z" level=info msg="TearDown network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" successfully" Feb 13 15:11:23.915028 containerd[1475]: time="2025-02-13T15:11:23.914499977Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" returns successfully" Feb 13 15:11:23.915696 containerd[1475]: time="2025-02-13T15:11:23.915351898Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:23.915696 containerd[1475]: time="2025-02-13T15:11:23.915430142Z" level=info msg="TearDown network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" successfully" Feb 13 15:11:23.915696 containerd[1475]: time="2025-02-13T15:11:23.915439902Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" returns successfully" Feb 13 15:11:23.917982 containerd[1475]: time="2025-02-13T15:11:23.917869137Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:23.917982 containerd[1475]: time="2025-02-13T15:11:23.917962102Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:23.917982 containerd[1475]: time="2025-02-13T15:11:23.917971542Z" level=info msg="StopPodSandbox for 
\"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:23.918637 containerd[1475]: time="2025-02-13T15:11:23.918333519Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:23.918637 containerd[1475]: time="2025-02-13T15:11:23.918440804Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:23.918965 containerd[1475]: time="2025-02-13T15:11:23.918457365Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:23.920921 containerd[1475]: time="2025-02-13T15:11:23.920861759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:5,}" Feb 13 15:11:23.922023 containerd[1475]: time="2025-02-13T15:11:23.921562793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:11:23.923439 kubelet[2624]: E0213 15:11:23.923411 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:23.931620 kubelet[2624]: I0213 15:11:23.931580 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485" Feb 13 15:11:23.933927 containerd[1475]: time="2025-02-13T15:11:23.933212626Z" level=info msg="StopPodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\"" Feb 13 15:11:23.933927 containerd[1475]: time="2025-02-13T15:11:23.933374313Z" level=info msg="Ensure that sandbox 70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485 in task-service has been cleanup successfully" Feb 13 15:11:23.936846 containerd[1475]: time="2025-02-13T15:11:23.936055400Z" level=info msg="TearDown network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" successfully" Feb 13 15:11:23.936846 containerd[1475]: time="2025-02-13T15:11:23.936087962Z" level=info msg="StopPodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" returns successfully" Feb 13 15:11:23.940382 kubelet[2624]: I0213 15:11:23.940135 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb" Feb 13 15:11:23.949934 containerd[1475]: time="2025-02-13T15:11:23.944670489Z" level=info msg="StopPodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\"" Feb 13 15:11:23.949934 containerd[1475]: time="2025-02-13T15:11:23.944989305Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" Feb 13 15:11:23.949934 containerd[1475]: time="2025-02-13T15:11:23.945074509Z" level=info msg="TearDown network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" successfully" Feb 13 15:11:23.949934 containerd[1475]: time="2025-02-13T15:11:23.945085269Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" returns successfully" Feb 13 15:11:23.949934 containerd[1475]: time="2025-02-13T15:11:23.945173233Z" level=info msg="Ensure that 
sandbox a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb in task-service has been cleanup successfully" Feb 13 15:11:23.952190 containerd[1475]: time="2025-02-13T15:11:23.952143524Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:23.952350 containerd[1475]: time="2025-02-13T15:11:23.952247249Z" level=info msg="TearDown network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" successfully" Feb 13 15:11:23.952350 containerd[1475]: time="2025-02-13T15:11:23.952258330Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" returns successfully" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.953004605Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.953120851Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.953138211Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.954401871Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.954502756Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.954514157Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.955295594Z" level=info msg="TearDown network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" successfully" Feb 13 15:11:23.957140 containerd[1475]: time="2025-02-13T15:11:23.955321475Z" level=info msg="StopPodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" returns successfully" Feb 13 15:11:23.957428 kubelet[2624]: I0213 15:11:23.954665 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2" Feb 13 15:11:23.957428 kubelet[2624]: E0213 15:11:23.956199 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:23.957595 containerd[1475]: time="2025-02-13T15:11:23.957156602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:11:23.957595 containerd[1475]: time="2025-02-13T15:11:23.957159002Z" level=info msg="StopPodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" Feb 13 15:11:23.957595 containerd[1475]: time="2025-02-13T15:11:23.957354491Z" level=info msg="TearDown network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" successfully" Feb 13 15:11:23.957595 containerd[1475]: time="2025-02-13T15:11:23.957364532Z" level=info msg="StopPodSandbox for 
\"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" returns successfully" Feb 13 15:11:23.957732 containerd[1475]: time="2025-02-13T15:11:23.957628904Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:23.957732 containerd[1475]: time="2025-02-13T15:11:23.957699708Z" level=info msg="StopPodSandbox for \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\"" Feb 13 15:11:23.957820 containerd[1475]: time="2025-02-13T15:11:23.957740270Z" level=info msg="TearDown network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" successfully" Feb 13 15:11:23.957820 containerd[1475]: time="2025-02-13T15:11:23.957752070Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" returns successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.957946320Z" level=info msg="Ensure that sandbox f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2 in task-service has been cleanup successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.958080086Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.958145049Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.958154329Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.958774399Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.958844362Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.958853363Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.959823969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:5,}" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.960140024Z" level=info msg="TearDown network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.960156464Z" level=info msg="StopPodSandbox for \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" returns successfully" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.960548283Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" Feb 13 15:11:23.960662 containerd[1475]: time="2025-02-13T15:11:23.960668849Z" level=info msg="TearDown network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" successfully" Feb 13 15:11:23.963970 kubelet[2624]: E0213 15:11:23.959037 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 13 15:11:23.963970 kubelet[2624]: E0213 15:11:23.963205 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.960679969Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" returns successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.960994584Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.961100749Z" level=info msg="TearDown network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.961111350Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" returns successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.961889467Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.962186441Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.962201722Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.962736307Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.962974478Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.962987599Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:23.964059 containerd[1475]: time="2025-02-13T15:11:23.963534225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:5,}" Feb 13 15:11:23.964626 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:11:23.964749 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:11:24.011553 systemd[1]: run-netns-cni\x2d757f9ad6\x2de844\x2da1d8\x2dd6d5\x2da2b394ed8fde.mount: Deactivated successfully. Feb 13 15:11:24.011822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e-shm.mount: Deactivated successfully. Feb 13 15:11:24.011953 systemd[1]: run-netns-cni\x2d0f8649aa\x2dd4f4\x2dee54\x2d6306\x2de1f2c0f5656c.mount: Deactivated successfully. Feb 13 15:11:24.012022 systemd[1]: run-netns-cni\x2d70e121e7\x2dd9c9\x2d9ae0\x2d654e\x2d39dee9b451c5.mount: Deactivated successfully. Feb 13 15:11:24.012370 systemd[1]: run-netns-cni\x2dca5c8dcd\x2d2901\x2de43c\x2d3d04\x2de978c2a9f440.mount: Deactivated successfully. 
Feb 13 15:11:24.012439 systemd[1]: run-netns-cni\x2dc6e9518f\x2d1798\x2da174\x2d5bfb\x2db2bb9535df21.mount: Deactivated successfully. Feb 13 15:11:24.012487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249590294.mount: Deactivated successfully. Feb 13 15:11:24.036878 containerd[1475]: time="2025-02-13T15:11:24.036747811Z" level=error msg="Failed to destroy network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.037471 containerd[1475]: time="2025-02-13T15:11:24.037434602Z" level=error msg="encountered an error cleaning up failed sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.037529 containerd[1475]: time="2025-02-13T15:11:24.037501126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.039975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc-shm.mount: Deactivated successfully. 
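Each RunPodSandbox failure here is the same race: the Calico CNI plugin needs /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico/ mounted, and calico-node on this host has only just started (see the pod_startup_latency_tracker entry below), so the file evidently does not exist yet. A rough Go sketch of the failing check, built only from what the error text itself states (readNodename is an illustrative name, not Calico's function):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is the path the errors above point at; calico/node writes the
    // host's node name there once it is running and has mounted /var/lib/calico/.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename sketches the check that keeps failing while calico-node is still coming up.
    func readNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // os.Stat already reads "stat /var/lib/calico/nodename: no such file or
            // directory"; the plugin appends the hint quoted in the log.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        if name, err := readNodename(); err != nil {
            fmt.Println("sandbox setup would fail here:", err)
        } else {
            fmt.Println("node name:", name)
        }
    }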
Feb 13 15:11:24.044700 kubelet[2624]: E0213 15:11:24.044126 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.044700 kubelet[2624]: E0213 15:11:24.044186 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:24.044700 kubelet[2624]: E0213 15:11:24.044206 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" Feb 13 15:11:24.045027 kubelet[2624]: E0213 15:11:24.044258 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4c875978-b5v57_calico-system(ffa4f9e1-7dbe-408b-9c49-96a8006df152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podUID="ffa4f9e1-7dbe-408b-9c49-96a8006df152" Feb 13 15:11:24.349361 kubelet[2624]: I0213 15:11:24.349239 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-bplkh" podStartSLOduration=1.400193639 podStartE2EDuration="17.349195284s" podCreationTimestamp="2025-02-13 15:11:07 +0000 UTC" firstStartedPulling="2025-02-13 15:11:07.670353483 +0000 UTC m=+21.053753154" lastFinishedPulling="2025-02-13 15:11:23.619355128 +0000 UTC m=+37.002754799" observedRunningTime="2025-02-13 15:11:23.951128076 +0000 UTC m=+37.334527747" watchObservedRunningTime="2025-02-13 15:11:24.349195284 +0000 UTC m=+37.732594955" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.346 [INFO][4803] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.347 [INFO][4803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" iface="eth0" netns="/var/run/netns/cni-ac07e907-6e2f-13ce-c2db-91c61b160ba2" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.356 [INFO][4803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" iface="eth0" netns="/var/run/netns/cni-ac07e907-6e2f-13ce-c2db-91c61b160ba2" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.359 [INFO][4803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" iface="eth0" netns="/var/run/netns/cni-ac07e907-6e2f-13ce-c2db-91c61b160ba2" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.359 [INFO][4803] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.367 [INFO][4803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.636 [INFO][4839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" HandleID="k8s-pod-network.0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Workload="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.636 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.637 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.650 [WARNING][4839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" HandleID="k8s-pod-network.0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Workload="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.650 [INFO][4839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" HandleID="k8s-pod-network.0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Workload="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.654 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:11:24.659239 containerd[1475]: 2025-02-13 15:11:24.657 [INFO][4803] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25" Feb 13 15:11:24.670671 containerd[1475]: time="2025-02-13T15:11:24.670484845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.670805 kubelet[2624]: E0213 15:11:24.670767 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.670849 kubelet[2624]: E0213 15:11:24.670824 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:24.670849 kubelet[2624]: E0213 15:11:24.670848 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-72d96" Feb 13 15:11:24.670907 kubelet[2624]: E0213 15:11:24.670902 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-72d96_kube-system(9a547d44-0314-4436-850e-6c8fdf4e6cfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f56818afa18f7814c3c65eb87681184aebcd6ca52a1fa54cee2735e8556fd25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-72d96" podUID="9a547d44-0314-4436-850e-6c8fdf4e6cfd" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.350 [INFO][4770] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.350 [INFO][4770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" iface="eth0" netns="/var/run/netns/cni-c2034dc6-5ae3-e62a-9fbb-fa1fd4d9fc9c" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.350 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" iface="eth0" netns="/var/run/netns/cni-c2034dc6-5ae3-e62a-9fbb-fa1fd4d9fc9c" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.356 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" iface="eth0" netns="/var/run/netns/cni-c2034dc6-5ae3-e62a-9fbb-fa1fd4d9fc9c" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.356 [INFO][4770] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.356 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.637 [INFO][4836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" HandleID="k8s-pod-network.0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Workload="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.637 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.654 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.667 [WARNING][4836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" HandleID="k8s-pod-network.0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Workload="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.667 [INFO][4836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" HandleID="k8s-pod-network.0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Workload="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.668 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:11:24.675753 containerd[1475]: 2025-02-13 15:11:24.672 [INFO][4770] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01" Feb 13 15:11:24.681615 containerd[1475]: time="2025-02-13T15:11:24.681062373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.681768 kubelet[2624]: E0213 15:11:24.681294 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.681768 kubelet[2624]: E0213 15:11:24.681347 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:24.681768 kubelet[2624]: E0213 15:11:24.681367 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-497kt" Feb 13 15:11:24.681861 kubelet[2624]: E0213 15:11:24.681414 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-497kt_kube-system(b1324758-fb3a-44a6-944b-64a2fbd93ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0249d1324cadfa186915592dae119450162c0291dc0ad86efc09e21ab1ad7e01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-497kt" podUID="b1324758-fb3a-44a6-944b-64a2fbd93ce8" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.354 [INFO][4814] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.354 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" iface="eth0" netns="/var/run/netns/cni-c0111e20-8bfb-fadf-7f2c-07472a1709d0" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.355 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" iface="eth0" netns="/var/run/netns/cni-c0111e20-8bfb-fadf-7f2c-07472a1709d0" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.357 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" iface="eth0" netns="/var/run/netns/cni-c0111e20-8bfb-fadf-7f2c-07472a1709d0" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.357 [INFO][4814] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.357 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.636 [INFO][4837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" HandleID="k8s-pod-network.52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Workload="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.637 [INFO][4837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.668 [INFO][4837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.678 [WARNING][4837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" HandleID="k8s-pod-network.52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Workload="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.678 [INFO][4837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" HandleID="k8s-pod-network.52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Workload="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.680 [INFO][4837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:11:24.684904 containerd[1475]: 2025-02-13 15:11:24.683 [INFO][4814] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538" Feb 13 15:11:24.687310 containerd[1475]: time="2025-02-13T15:11:24.687262818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.687516 kubelet[2624]: E0213 15:11:24.687490 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.687566 kubelet[2624]: E0213 15:11:24.687542 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:24.687566 kubelet[2624]: E0213 15:11:24.687562 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" Feb 13 15:11:24.687626 kubelet[2624]: E0213 15:11:24.687614 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-r8prd_calico-apiserver(3354d09c-c5d1-4b08-92f8-0175175a9438)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podUID="3354d09c-c5d1-4b08-92f8-0175175a9438" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.346 [INFO][4727] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.349 [INFO][4727] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" iface="eth0" netns="/var/run/netns/cni-98f4e8a9-21f8-5061-046e-4e3cd8a3e043" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.351 [INFO][4727] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" iface="eth0" netns="/var/run/netns/cni-98f4e8a9-21f8-5061-046e-4e3cd8a3e043" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.353 [INFO][4727] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" iface="eth0" netns="/var/run/netns/cni-98f4e8a9-21f8-5061-046e-4e3cd8a3e043" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.353 [INFO][4727] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.354 [INFO][4727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.637 [INFO][4834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" HandleID="k8s-pod-network.6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Workload="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.637 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.680 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.691 [WARNING][4834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" HandleID="k8s-pod-network.6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Workload="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.691 [INFO][4834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" HandleID="k8s-pod-network.6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Workload="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.692 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:11:24.697599 containerd[1475]: 2025-02-13 15:11:24.695 [INFO][4727] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44" Feb 13 15:11:24.700922 containerd[1475]: time="2025-02-13T15:11:24.700769080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.701081 kubelet[2624]: E0213 15:11:24.701055 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.701219 kubelet[2624]: E0213 15:11:24.701115 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:24.701219 kubelet[2624]: E0213 15:11:24.701137 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" Feb 13 15:11:24.701219 kubelet[2624]: E0213 15:11:24.701189 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bd4f4757-bzs76_calico-apiserver(669d688c-25ab-473d-9d28-45c8a124548b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podUID="669d688c-25ab-473d-9d28-45c8a124548b" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.352 [INFO][4763] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.352 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" iface="eth0" netns="/var/run/netns/cni-b9f7224b-dcc8-d5b2-1011-d0b219da6da7" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.353 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" iface="eth0" netns="/var/run/netns/cni-b9f7224b-dcc8-d5b2-1011-d0b219da6da7" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.353 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" iface="eth0" netns="/var/run/netns/cni-b9f7224b-dcc8-d5b2-1011-d0b219da6da7" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.353 [INFO][4763] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.353 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.639 [INFO][4835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" HandleID="k8s-pod-network.13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Workload="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.640 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.692 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.701 [WARNING][4835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" HandleID="k8s-pod-network.13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Workload="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.701 [INFO][4835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" HandleID="k8s-pod-network.13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Workload="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.703 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:11:24.707060 containerd[1475]: 2025-02-13 15:11:24.705 [INFO][4763] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9" Feb 13 15:11:24.710416 containerd[1475]: time="2025-02-13T15:11:24.710339881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.710579 kubelet[2624]: E0213 15:11:24.710559 2624 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:11:24.710842 kubelet[2624]: E0213 15:11:24.710607 2624 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:24.710842 kubelet[2624]: E0213 15:11:24.710628 2624 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwm8r" Feb 13 15:11:24.710842 kubelet[2624]: E0213 15:11:24.710696 2624 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwm8r_calico-system(a5494d8d-0818-4dbe-926f-03408aa43bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwm8r" podUID="a5494d8d-0818-4dbe-926f-03408aa43bf9" Feb 13 15:11:24.959811 kubelet[2624]: I0213 15:11:24.959477 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc" Feb 13 15:11:24.959811 kubelet[2624]: I0213 15:11:24.959502 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:24.960871 kubelet[2624]: E0213 15:11:24.960246 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:24.962047 containerd[1475]: time="2025-02-13T15:11:24.961999635Z" level=info msg="StopPodSandbox for 
\"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\"" Feb 13 15:11:24.962278 containerd[1475]: time="2025-02-13T15:11:24.962103039Z" level=info msg="TearDown network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" successfully" Feb 13 15:11:24.962278 containerd[1475]: time="2025-02-13T15:11:24.962112920Z" level=info msg="StopPodSandbox for \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" returns successfully" Feb 13 15:11:24.962278 containerd[1475]: time="2025-02-13T15:11:24.962172403Z" level=info msg="StopPodSandbox for \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\"" Feb 13 15:11:24.962337 containerd[1475]: time="2025-02-13T15:11:24.962293928Z" level=info msg="Ensure that sandbox b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc in task-service has been cleanup successfully" Feb 13 15:11:24.962621 containerd[1475]: time="2025-02-13T15:11:24.962530779Z" level=info msg="StopPodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\"" Feb 13 15:11:24.962746 containerd[1475]: time="2025-02-13T15:11:24.962624143Z" level=info msg="TearDown network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" successfully" Feb 13 15:11:24.962746 containerd[1475]: time="2025-02-13T15:11:24.962634064Z" level=info msg="StopPodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" returns successfully" Feb 13 15:11:24.962746 containerd[1475]: time="2025-02-13T15:11:24.962692706Z" level=info msg="StopPodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\"" Feb 13 15:11:24.962746 containerd[1475]: time="2025-02-13T15:11:24.962732628Z" level=info msg="StopPodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\"" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962752509Z" level=info msg="TearDown network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" successfully" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962761630Z" level=info msg="StopPodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" returns successfully" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962809072Z" level=info msg="StopPodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\"" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962868355Z" level=info msg="TearDown network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" successfully" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962877155Z" level=info msg="StopPodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" returns successfully" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962811152Z" level=info msg="TearDown network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" successfully" Feb 13 15:11:24.962971 containerd[1475]: time="2025-02-13T15:11:24.962907636Z" level=info msg="StopPodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" returns successfully" Feb 13 15:11:24.963198 containerd[1475]: time="2025-02-13T15:11:24.963042643Z" level=info msg="TearDown network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\" successfully" Feb 13 15:11:24.963198 containerd[1475]: 
time="2025-02-13T15:11:24.963062244Z" level=info msg="StopPodSandbox for \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\" returns successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964120452Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964174295Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964195536Z" level=info msg="TearDown network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964206136Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" returns successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964211816Z" level=info msg="StopPodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\"" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964271219Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964182775Z" level=info msg="StopPodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964324542Z" level=info msg="TearDown network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964333782Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" returns successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964338582Z" level=info msg="TearDown network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964278220Z" level=info msg="TearDown network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964366104Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" returns successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964375944Z" level=info msg="StopPodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" returns successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964339062Z" level=info msg="TearDown network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964425106Z" level=info msg="StopPodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" returns successfully" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964367264Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" Feb 13 15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964560793Z" level=info msg="TearDown network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" successfully" Feb 13 
15:11:24.964620 containerd[1475]: time="2025-02-13T15:11:24.964574153Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" returns successfully" Feb 13 15:11:24.965416 containerd[1475]: time="2025-02-13T15:11:24.965273665Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:24.965416 containerd[1475]: time="2025-02-13T15:11:24.965362069Z" level=info msg="TearDown network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" successfully" Feb 13 15:11:24.965416 containerd[1475]: time="2025-02-13T15:11:24.965371670Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" returns successfully" Feb 13 15:11:24.965488 containerd[1475]: time="2025-02-13T15:11:24.965426992Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" Feb 13 15:11:24.965686 containerd[1475]: time="2025-02-13T15:11:24.965449714Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:24.965815 containerd[1475]: time="2025-02-13T15:11:24.965785169Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:24.965860 containerd[1475]: time="2025-02-13T15:11:24.965822811Z" level=info msg="TearDown network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" successfully" Feb 13 15:11:24.965860 containerd[1475]: time="2025-02-13T15:11:24.965838931Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" returns successfully" Feb 13 15:11:24.965943 containerd[1475]: time="2025-02-13T15:11:24.965858812Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:24.965943 containerd[1475]: time="2025-02-13T15:11:24.965474155Z" level=info msg="TearDown network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" successfully" Feb 13 15:11:24.965943 containerd[1475]: time="2025-02-13T15:11:24.965879213Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" returns successfully" Feb 13 15:11:24.965943 containerd[1475]: time="2025-02-13T15:11:24.965494876Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:24.965943 containerd[1475]: time="2025-02-13T15:11:24.965869933Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:24.965943 containerd[1475]: time="2025-02-13T15:11:24.965540598Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:24.966102 containerd[1475]: time="2025-02-13T15:11:24.965990538Z" level=info msg="TearDown network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" successfully" Feb 13 15:11:24.966102 containerd[1475]: time="2025-02-13T15:11:24.965998419Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" returns successfully" Feb 13 15:11:24.966102 containerd[1475]: time="2025-02-13T15:11:24.965938936Z" level=info msg="TearDown network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" 
successfully" Feb 13 15:11:24.966102 containerd[1475]: time="2025-02-13T15:11:24.966029020Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" returns successfully" Feb 13 15:11:24.966102 containerd[1475]: time="2025-02-13T15:11:24.965522957Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:24.966224 containerd[1475]: time="2025-02-13T15:11:24.966108704Z" level=info msg="TearDown network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" successfully" Feb 13 15:11:24.966224 containerd[1475]: time="2025-02-13T15:11:24.966117464Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" returns successfully" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967556011Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967633814Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967674696Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967688217Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967701297Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967732739Z" level=info msg="TearDown network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" successfully" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967753420Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" returns successfully" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967771580Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967828103Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:24.967895 containerd[1475]: time="2025-02-13T15:11:24.967836903Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967901546Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967917827Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967927628Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967681936Z" level=info msg="StopPodSandbox for 
\"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967986870Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967996351Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967998871Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.968027672Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.967632414Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.968236882Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:24.968365 containerd[1475]: time="2025-02-13T15:11:24.968249642Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:24.968561 containerd[1475]: time="2025-02-13T15:11:24.968445852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:11:24.968561 containerd[1475]: time="2025-02-13T15:11:24.968460892Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:24.968561 containerd[1475]: time="2025-02-13T15:11:24.968537416Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:24.968561 containerd[1475]: time="2025-02-13T15:11:24.968545976Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:24.968640 kubelet[2624]: E0213 15:11:24.968398 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:24.968770 containerd[1475]: time="2025-02-13T15:11:24.968693583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:5,}" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.968826949Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.968930554Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.968941594Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.968990357Z" level=info msg="StopPodSandbox for 
\"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.969013918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.969040439Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:24.969838 containerd[1475]: time="2025-02-13T15:11:24.969049239Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:24.970274 kubelet[2624]: E0213 15:11:24.969707 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:24.970361 containerd[1475]: time="2025-02-13T15:11:24.970145090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:5,}" Feb 13 15:11:24.970361 containerd[1475]: time="2025-02-13T15:11:24.970328138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:5,}" Feb 13 15:11:24.979067 containerd[1475]: time="2025-02-13T15:11:24.978982657Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:24.979163 containerd[1475]: time="2025-02-13T15:11:24.979089542Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:24.979163 containerd[1475]: time="2025-02-13T15:11:24.979102142Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:24.979591 containerd[1475]: time="2025-02-13T15:11:24.979366555Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:24.979672 containerd[1475]: time="2025-02-13T15:11:24.979596485Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:24.979672 containerd[1475]: time="2025-02-13T15:11:24.979612006Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:24.980187 containerd[1475]: time="2025-02-13T15:11:24.980101228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:6,}" Feb 13 15:11:25.008588 systemd[1]: run-netns-cni\x2dc0111e20\x2d8bfb\x2dfadf\x2d7f2c\x2d07472a1709d0.mount: Deactivated successfully. Feb 13 15:11:25.008903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a98f9a6306fa9b1ae65b5903c486aa2a744b20f9a13171c9903e09759fe538-shm.mount: Deactivated successfully. Feb 13 15:11:25.009710 systemd[1]: run-netns-cni\x2db9f7224b\x2ddcc8\x2dd5b2\x2d1011\x2dd0b219da6da7.mount: Deactivated successfully. Feb 13 15:11:25.009791 systemd[1]: run-netns-cni\x2d98f4e8a9\x2d21f8\x2d5061\x2d046e\x2d4e3cd8a3e043.mount: Deactivated successfully. 
Feb 13 15:11:25.009839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13fc5d2eaa99e0ec011e7b6c46180362d6918271753fedc63d8866aa8a777bb9-shm.mount: Deactivated successfully. Feb 13 15:11:25.009889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6819d38ff6e5fc94ca8c076bbc4fd064d21d39ecd59a13d7f73094a72a459b44-shm.mount: Deactivated successfully. Feb 13 15:11:25.009942 systemd[1]: run-netns-cni\x2daf66b2ac\x2d301f\x2d4c67\x2dc5e7\x2dda561eeefa14.mount: Deactivated successfully. Feb 13 15:11:25.196461 systemd-networkd[1388]: calid74078eac52: Link UP Feb 13 15:11:25.196911 systemd-networkd[1388]: calid74078eac52: Gained carrier Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.019 [INFO][4883] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.036 [INFO][4883] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0 calico-apiserver-54bd4f4757- calico-apiserver 3354d09c-c5d1-4b08-92f8-0175175a9438 965 0 2025-02-13 15:11:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bd4f4757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54bd4f4757-r8prd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid74078eac52 [] []}} ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.036 [INFO][4883] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.107 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" HandleID="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Workload="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.126 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" HandleID="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Workload="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54bd4f4757-r8prd", "timestamp":"2025-02-13 15:11:25.107217546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.126 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.127 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.127 [INFO][4917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.128 [INFO][4917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.141 [INFO][4917] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.158 [INFO][4917] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.162 [INFO][4917] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.164 [INFO][4917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.164 [INFO][4917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.167 [INFO][4917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43 Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.172 [INFO][4917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.178 [INFO][4917] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.178 [INFO][4917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" host="localhost" Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.178 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:11:25.250237 containerd[1475]: 2025-02-13 15:11:25.178 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" HandleID="k8s-pod-network.0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Workload="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.250740 containerd[1475]: 2025-02-13 15:11:25.186 [INFO][4883] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0", GenerateName:"calico-apiserver-54bd4f4757-", Namespace:"calico-apiserver", SelfLink:"", UID:"3354d09c-c5d1-4b08-92f8-0175175a9438", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bd4f4757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54bd4f4757-r8prd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid74078eac52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.250740 containerd[1475]: 2025-02-13 15:11:25.186 [INFO][4883] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.250740 containerd[1475]: 2025-02-13 15:11:25.186 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid74078eac52 ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.250740 containerd[1475]: 2025-02-13 15:11:25.199 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.250740 containerd[1475]: 2025-02-13 15:11:25.200 [INFO][4883] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0", GenerateName:"calico-apiserver-54bd4f4757-", Namespace:"calico-apiserver", SelfLink:"", UID:"3354d09c-c5d1-4b08-92f8-0175175a9438", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bd4f4757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43", Pod:"calico-apiserver-54bd4f4757-r8prd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid74078eac52", MAC:"8e:9d:88:11:51:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.250740 containerd[1475]: 2025-02-13 15:11:25.247 [INFO][4883] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-r8prd" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--r8prd-eth0" Feb 13 15:11:25.299397 containerd[1475]: time="2025-02-13T15:11:25.299300983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:25.299515 containerd[1475]: time="2025-02-13T15:11:25.299418388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:25.299515 containerd[1475]: time="2025-02-13T15:11:25.299450790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.299657 containerd[1475]: time="2025-02-13T15:11:25.299618197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.309081 systemd-networkd[1388]: cali0b01218e0db: Link UP Feb 13 15:11:25.309407 systemd-networkd[1388]: cali0b01218e0db: Gained carrier Feb 13 15:11:25.321902 systemd[1]: Started cri-containerd-0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43.scope - libcontainer container 0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43. 
Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.075 [INFO][4896] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.126 [INFO][4896] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kwm8r-eth0 csi-node-driver- calico-system a5494d8d-0818-4dbe-926f-03408aa43bf9 964 0 2025-02-13 15:11:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kwm8r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b01218e0db [] []}} ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.126 [INFO][4896] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.183 [INFO][4980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" HandleID="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Workload="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.204 [INFO][4980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" HandleID="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Workload="localhost-k8s-csi--node--driver--kwm8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kwm8r", "timestamp":"2025-02-13 15:11:25.183470559 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.204 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.204 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.204 [INFO][4980] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.245 [INFO][4980] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.257 [INFO][4980] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.261 [INFO][4980] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.264 [INFO][4980] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.266 [INFO][4980] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.266 [INFO][4980] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.267 [INFO][4980] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29 Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.280 [INFO][4980] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.294 [INFO][4980] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.294 [INFO][4980] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" host="localhost" Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.294 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:11:25.327707 containerd[1475]: 2025-02-13 15:11:25.294 [INFO][4980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" HandleID="k8s-pod-network.c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Workload="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.328307 containerd[1475]: 2025-02-13 15:11:25.297 [INFO][4896] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kwm8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5494d8d-0818-4dbe-926f-03408aa43bf9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kwm8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b01218e0db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.328307 containerd[1475]: 2025-02-13 15:11:25.300 [INFO][4896] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.328307 containerd[1475]: 2025-02-13 15:11:25.300 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b01218e0db ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.328307 containerd[1475]: 2025-02-13 15:11:25.308 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.328307 containerd[1475]: 2025-02-13 15:11:25.309 [INFO][4896] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kwm8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5494d8d-0818-4dbe-926f-03408aa43bf9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29", Pod:"csi-node-driver-kwm8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b01218e0db", MAC:"ae:2c:6c:bc:22:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.328307 containerd[1475]: 2025-02-13 15:11:25.325 [INFO][4896] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29" Namespace="calico-system" Pod="csi-node-driver-kwm8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwm8r-eth0" Feb 13 15:11:25.339699 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:25.351523 systemd-networkd[1388]: cali1e5022b86f5: Link UP Feb 13 15:11:25.353125 systemd-networkd[1388]: cali1e5022b86f5: Gained carrier Feb 13 15:11:25.357167 containerd[1475]: time="2025-02-13T15:11:25.355908797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:25.357167 containerd[1475]: time="2025-02-13T15:11:25.356504663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:25.357167 containerd[1475]: time="2025-02-13T15:11:25.356527584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.357167 containerd[1475]: time="2025-02-13T15:11:25.356626229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.370705 containerd[1475]: time="2025-02-13T15:11:25.370621615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-r8prd,Uid:3354d09c-c5d1-4b08-92f8-0175175a9438,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43\"" Feb 13 15:11:25.374722 containerd[1475]: time="2025-02-13T15:11:25.374471067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.113 [INFO][4928] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.132 [INFO][4928] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0 calico-apiserver-54bd4f4757- calico-apiserver 669d688c-25ab-473d-9d28-45c8a124548b 962 0 2025-02-13 15:11:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bd4f4757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54bd4f4757-bzs76 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e5022b86f5 [] []}} ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.132 [INFO][4928] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.226 [INFO][4992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" HandleID="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Workload="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.253 [INFO][4992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" HandleID="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Workload="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005b3130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54bd4f4757-bzs76", "timestamp":"2025-02-13 15:11:25.226137788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.253 [INFO][4992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.294 [INFO][4992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.295 [INFO][4992] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.299 [INFO][4992] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.308 [INFO][4992] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.318 [INFO][4992] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.324 [INFO][4992] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.329 [INFO][4992] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.329 [INFO][4992] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.331 [INFO][4992] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551 Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.336 [INFO][4992] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.342 [INFO][4992] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.342 [INFO][4992] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" host="localhost" Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.342 [INFO][4992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:11:25.378887 containerd[1475]: 2025-02-13 15:11:25.342 [INFO][4992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" HandleID="k8s-pod-network.d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Workload="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.379405 containerd[1475]: 2025-02-13 15:11:25.347 [INFO][4928] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0", GenerateName:"calico-apiserver-54bd4f4757-", Namespace:"calico-apiserver", SelfLink:"", UID:"669d688c-25ab-473d-9d28-45c8a124548b", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bd4f4757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54bd4f4757-bzs76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e5022b86f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.379405 containerd[1475]: 2025-02-13 15:11:25.348 [INFO][4928] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.379405 containerd[1475]: 2025-02-13 15:11:25.348 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e5022b86f5 ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.379405 containerd[1475]: 2025-02-13 15:11:25.350 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.379405 containerd[1475]: 2025-02-13 15:11:25.353 [INFO][4928] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0", GenerateName:"calico-apiserver-54bd4f4757-", Namespace:"calico-apiserver", SelfLink:"", UID:"669d688c-25ab-473d-9d28-45c8a124548b", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bd4f4757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551", Pod:"calico-apiserver-54bd4f4757-bzs76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e5022b86f5", MAC:"4e:17:08:4f:42:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.379405 containerd[1475]: 2025-02-13 15:11:25.374 [INFO][4928] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551" Namespace="calico-apiserver" Pod="calico-apiserver-54bd4f4757-bzs76" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bd4f4757--bzs76-eth0" Feb 13 15:11:25.388879 systemd[1]: Started cri-containerd-c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29.scope - libcontainer container c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29. 
Feb 13 15:11:25.402271 systemd-networkd[1388]: califd048d5a89e: Link UP Feb 13 15:11:25.402463 systemd-networkd[1388]: califd048d5a89e: Gained carrier Feb 13 15:11:25.405889 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.138 [INFO][4959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.156 [INFO][4959] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0 calico-kube-controllers-c4c875978- calico-system ffa4f9e1-7dbe-408b-9c49-96a8006df152 830 0 2025-02-13 15:11:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c4c875978 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c4c875978-b5v57 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd048d5a89e [] []}} ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.156 [INFO][4959] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.202 [INFO][4993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" HandleID="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Workload="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.253 [INFO][4993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" HandleID="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Workload="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003734f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c4c875978-b5v57", "timestamp":"2025-02-13 15:11:25.20205255 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.253 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.346 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.346 [INFO][4993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.349 [INFO][4993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.358 [INFO][4993] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.368 [INFO][4993] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.371 [INFO][4993] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.376 [INFO][4993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.376 [INFO][4993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.379 [INFO][4993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.384 [INFO][4993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.394 [INFO][4993] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.394 [INFO][4993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" host="localhost" Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.394 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:11:25.416451 containerd[1475]: 2025-02-13 15:11:25.394 [INFO][4993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" HandleID="k8s-pod-network.2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Workload="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.417143 containerd[1475]: 2025-02-13 15:11:25.399 [INFO][4959] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0", GenerateName:"calico-kube-controllers-c4c875978-", Namespace:"calico-system", SelfLink:"", UID:"ffa4f9e1-7dbe-408b-9c49-96a8006df152", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4c875978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c4c875978-b5v57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd048d5a89e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.417143 containerd[1475]: 2025-02-13 15:11:25.399 [INFO][4959] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.417143 containerd[1475]: 2025-02-13 15:11:25.399 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd048d5a89e ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.417143 containerd[1475]: 2025-02-13 15:11:25.401 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.417143 containerd[1475]: 2025-02-13 15:11:25.402 [INFO][4959] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0", GenerateName:"calico-kube-controllers-c4c875978-", Namespace:"calico-system", SelfLink:"", UID:"ffa4f9e1-7dbe-408b-9c49-96a8006df152", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4c875978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f", Pod:"calico-kube-controllers-c4c875978-b5v57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd048d5a89e", MAC:"66:29:e6:af:13:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.417143 containerd[1475]: 2025-02-13 15:11:25.413 [INFO][4959] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f" Namespace="calico-system" Pod="calico-kube-controllers-c4c875978-b5v57" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4c875978--b5v57-eth0" Feb 13 15:11:25.429674 containerd[1475]: time="2025-02-13T15:11:25.428183351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:25.429674 containerd[1475]: time="2025-02-13T15:11:25.428248114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:25.429674 containerd[1475]: time="2025-02-13T15:11:25.428258275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.429674 containerd[1475]: time="2025-02-13T15:11:25.428341318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.430576 containerd[1475]: time="2025-02-13T15:11:25.430530696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwm8r,Uid:a5494d8d-0818-4dbe-926f-03408aa43bf9,Namespace:calico-system,Attempt:5,} returns sandbox id \"c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29\"" Feb 13 15:11:25.443663 containerd[1475]: time="2025-02-13T15:11:25.443521918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:25.443861 containerd[1475]: time="2025-02-13T15:11:25.443605281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:25.444345 containerd[1475]: time="2025-02-13T15:11:25.444293912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.444442 containerd[1475]: time="2025-02-13T15:11:25.444409437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.457139 systemd-networkd[1388]: calia18abdef05b: Link UP Feb 13 15:11:25.457888 systemd-networkd[1388]: calia18abdef05b: Gained carrier Feb 13 15:11:25.458865 systemd[1]: Started cri-containerd-d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551.scope - libcontainer container d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551. Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.115 [INFO][4899] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.157 [INFO][4899] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--497kt-eth0 coredns-76f75df574- kube-system b1324758-fb3a-44a6-944b-64a2fbd93ce8 963 0 2025-02-13 15:10:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-497kt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia18abdef05b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.158 [INFO][4899] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.226 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" HandleID="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Workload="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.255 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" HandleID="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Workload="localhost-k8s-coredns--76f75df574--497kt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000262aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-497kt", "timestamp":"2025-02-13 15:11:25.226335797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.256 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.394 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.394 [INFO][5005] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.397 [INFO][5005] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.406 [INFO][5005] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.418 [INFO][5005] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.420 [INFO][5005] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.424 [INFO][5005] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.424 [INFO][5005] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.430 [INFO][5005] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.436 [INFO][5005] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.445 [INFO][5005] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.445 [INFO][5005] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" host="localhost" Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.446 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:11:25.477501 containerd[1475]: 2025-02-13 15:11:25.446 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" HandleID="k8s-pod-network.23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Workload="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.478119 containerd[1475]: 2025-02-13 15:11:25.450 [INFO][4899] cni-plugin/k8s.go 386: Populated endpoint ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--497kt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b1324758-fb3a-44a6-944b-64a2fbd93ce8", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-497kt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia18abdef05b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.478119 containerd[1475]: 2025-02-13 15:11:25.452 [INFO][4899] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.478119 containerd[1475]: 2025-02-13 15:11:25.452 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia18abdef05b ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.478119 containerd[1475]: 2025-02-13 15:11:25.457 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.478119 containerd[1475]: 2025-02-13 15:11:25.457 
[INFO][4899] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--497kt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b1324758-fb3a-44a6-944b-64a2fbd93ce8", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba", Pod:"coredns-76f75df574-497kt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia18abdef05b", MAC:"b6:e0:f7:c6:ae:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.478119 containerd[1475]: 2025-02-13 15:11:25.473 [INFO][4899] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba" Namespace="kube-system" Pod="coredns-76f75df574-497kt" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--497kt-eth0" Feb 13 15:11:25.486962 systemd[1]: Started cri-containerd-2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f.scope - libcontainer container 2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f. 
Feb 13 15:11:25.489220 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:25.497868 systemd-networkd[1388]: califeb28fbcdb3: Link UP Feb 13 15:11:25.498722 systemd-networkd[1388]: califeb28fbcdb3: Gained carrier Feb 13 15:11:25.509848 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.111 [INFO][4897] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.132 [INFO][4897] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--72d96-eth0 coredns-76f75df574- kube-system 9a547d44-0314-4436-850e-6c8fdf4e6cfd 961 0 2025-02-13 15:10:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-72d96 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califeb28fbcdb3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.132 [INFO][4897] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.216 [INFO][4986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" HandleID="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Workload="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.256 [INFO][4986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" HandleID="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Workload="localhost-k8s-coredns--76f75df574--72d96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000305510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-72d96", "timestamp":"2025-02-13 15:11:25.216833612 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.257 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.445 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.445 [INFO][4986] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.449 [INFO][4986] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.459 [INFO][4986] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.466 [INFO][4986] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.468 [INFO][4986] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.470 [INFO][4986] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.470 [INFO][4986] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.474 [INFO][4986] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.482 [INFO][4986] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.490 [INFO][4986] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.490 [INFO][4986] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" host="localhost" Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.490 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:11:25.515542 containerd[1475]: 2025-02-13 15:11:25.490 [INFO][4986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" HandleID="k8s-pod-network.62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Workload="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.516111 containerd[1475]: 2025-02-13 15:11:25.493 [INFO][4897] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--72d96-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a547d44-0314-4436-850e-6c8fdf4e6cfd", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-72d96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califeb28fbcdb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.516111 containerd[1475]: 2025-02-13 15:11:25.493 [INFO][4897] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.516111 containerd[1475]: 2025-02-13 15:11:25.493 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califeb28fbcdb3 ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.516111 containerd[1475]: 2025-02-13 15:11:25.499 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.516111 containerd[1475]: 2025-02-13 15:11:25.501 
[INFO][4897] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--72d96-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9a547d44-0314-4436-850e-6c8fdf4e6cfd", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c", Pod:"coredns-76f75df574-72d96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califeb28fbcdb3", MAC:"8a:f5:9c:1c:cf:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:11:25.516111 containerd[1475]: 2025-02-13 15:11:25.511 [INFO][4897] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c" Namespace="kube-system" Pod="coredns-76f75df574-72d96" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--72d96-eth0" Feb 13 15:11:25.517775 containerd[1475]: time="2025-02-13T15:11:25.517606634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:25.518020 containerd[1475]: time="2025-02-13T15:11:25.517969810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:25.518157 containerd[1475]: time="2025-02-13T15:11:25.518118216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.519203 containerd[1475]: time="2025-02-13T15:11:25.518827728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.521877 containerd[1475]: time="2025-02-13T15:11:25.521778100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bd4f4757-bzs76,Uid:669d688c-25ab-473d-9d28-45c8a124548b,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551\"" Feb 13 15:11:25.544814 containerd[1475]: time="2025-02-13T15:11:25.544712287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:25.544927 containerd[1475]: time="2025-02-13T15:11:25.544861373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:25.544977 containerd[1475]: time="2025-02-13T15:11:25.544913816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.545161 containerd[1475]: time="2025-02-13T15:11:25.545056262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:25.546893 systemd[1]: Started cri-containerd-23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba.scope - libcontainer container 23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba. Feb 13 15:11:25.548478 containerd[1475]: time="2025-02-13T15:11:25.548446814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4c875978-b5v57,Uid:ffa4f9e1-7dbe-408b-9c49-96a8006df152,Namespace:calico-system,Attempt:6,} returns sandbox id \"2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f\"" Feb 13 15:11:25.561932 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:25.570942 systemd[1]: Started cri-containerd-62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c.scope - libcontainer container 62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c. 
Feb 13 15:11:25.582340 containerd[1475]: time="2025-02-13T15:11:25.582304689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-497kt,Uid:b1324758-fb3a-44a6-944b-64a2fbd93ce8,Namespace:kube-system,Attempt:5,} returns sandbox id \"23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba\"" Feb 13 15:11:25.583298 kubelet[2624]: E0213 15:11:25.583277 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:25.586898 containerd[1475]: time="2025-02-13T15:11:25.586844852Z" level=info msg="CreateContainer within sandbox \"23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:11:25.588135 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:25.610040 containerd[1475]: time="2025-02-13T15:11:25.609935286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-72d96,Uid:9a547d44-0314-4436-850e-6c8fdf4e6cfd,Namespace:kube-system,Attempt:5,} returns sandbox id \"62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c\"" Feb 13 15:11:25.610705 kubelet[2624]: E0213 15:11:25.610687 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:25.612600 containerd[1475]: time="2025-02-13T15:11:25.612478680Z" level=info msg="CreateContainer within sandbox \"62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:11:25.643889 containerd[1475]: time="2025-02-13T15:11:25.643846364Z" level=info msg="CreateContainer within sandbox \"62a92cc9cfe5c8eabeb9060f993b7da5ba7ec2fafa1fd1f2ef8b3a992e9ce17c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd077a566eaf071b8e78662a7d7e241b83dfb19eb2097d291cd264c61202f43f\"" Feb 13 15:11:25.644668 containerd[1475]: time="2025-02-13T15:11:25.644532714Z" level=info msg="StartContainer for \"dd077a566eaf071b8e78662a7d7e241b83dfb19eb2097d291cd264c61202f43f\"" Feb 13 15:11:25.645231 containerd[1475]: time="2025-02-13T15:11:25.645184103Z" level=info msg="CreateContainer within sandbox \"23fa4b225061c29f3f98c362430153e2c5773c1f5c3072e451d80532627ecbba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39245bfa6cd8fe911773014375eb0083e917fed8eba9fa7dc29225f28192d889\"" Feb 13 15:11:25.645862 containerd[1475]: time="2025-02-13T15:11:25.645684926Z" level=info msg="StartContainer for \"39245bfa6cd8fe911773014375eb0083e917fed8eba9fa7dc29225f28192d889\"" Feb 13 15:11:25.672872 systemd[1]: Started cri-containerd-39245bfa6cd8fe911773014375eb0083e917fed8eba9fa7dc29225f28192d889.scope - libcontainer container 39245bfa6cd8fe911773014375eb0083e917fed8eba9fa7dc29225f28192d889. Feb 13 15:11:25.675494 systemd[1]: Started cri-containerd-dd077a566eaf071b8e78662a7d7e241b83dfb19eb2097d291cd264c61202f43f.scope - libcontainer container dd077a566eaf071b8e78662a7d7e241b83dfb19eb2097d291cd264c61202f43f. 
Feb 13 15:11:25.724185 containerd[1475]: time="2025-02-13T15:11:25.724127397Z" level=info msg="StartContainer for \"39245bfa6cd8fe911773014375eb0083e917fed8eba9fa7dc29225f28192d889\" returns successfully" Feb 13 15:11:25.724294 containerd[1475]: time="2025-02-13T15:11:25.724131597Z" level=info msg="StartContainer for \"dd077a566eaf071b8e78662a7d7e241b83dfb19eb2097d291cd264c61202f43f\" returns successfully" Feb 13 15:11:25.970762 kubelet[2624]: E0213 15:11:25.970527 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:26.013422 kernel: bpftool[5541]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:11:26.039287 kubelet[2624]: I0213 15:11:26.039242 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-72d96" podStartSLOduration=27.039165089 podStartE2EDuration="27.039165089s" podCreationTimestamp="2025-02-13 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:26.002095516 +0000 UTC m=+39.385495267" watchObservedRunningTime="2025-02-13 15:11:26.039165089 +0000 UTC m=+39.422564760" Feb 13 15:11:26.040097 kubelet[2624]: I0213 15:11:26.040050 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-497kt" podStartSLOduration=27.040015006 podStartE2EDuration="27.040015006s" podCreationTimestamp="2025-02-13 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:26.039022243 +0000 UTC m=+39.422421954" watchObservedRunningTime="2025-02-13 15:11:26.040015006 +0000 UTC m=+39.423414677" Feb 13 15:11:26.050363 kubelet[2624]: E0213 15:11:26.050036 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:26.208971 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:42684.service - OpenSSH per-connection server daemon (10.0.0.1:42684). Feb 13 15:11:26.229940 systemd-networkd[1388]: vxlan.calico: Link UP Feb 13 15:11:26.229948 systemd-networkd[1388]: vxlan.calico: Gained carrier Feb 13 15:11:26.293692 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 42684 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:26.294949 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:26.299718 systemd-logind[1454]: New session 10 of user core. Feb 13 15:11:26.302891 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:11:26.491542 sshd[5587]: Connection closed by 10.0.0.1 port 42684 Feb 13 15:11:26.491983 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:26.504262 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:42684.service: Deactivated successfully. Feb 13 15:11:26.507420 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:11:26.509779 systemd-networkd[1388]: calid74078eac52: Gained IPv6LL Feb 13 15:11:26.511466 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:11:26.524195 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:42686.service - OpenSSH per-connection server daemon (10.0.0.1:42686). 
Feb 13 15:11:26.525474 systemd-logind[1454]: Removed session 10. Feb 13 15:11:26.562472 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 42686 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:26.563931 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:26.568495 systemd-logind[1454]: New session 11 of user core. Feb 13 15:11:26.577859 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:11:26.764761 systemd-networkd[1388]: cali0b01218e0db: Gained IPv6LL Feb 13 15:11:26.765724 systemd-networkd[1388]: cali1e5022b86f5: Gained IPv6LL Feb 13 15:11:26.781441 sshd[5639]: Connection closed by 10.0.0.1 port 42686 Feb 13 15:11:26.782455 sshd-session[5637]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:26.799579 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:42686.service: Deactivated successfully. Feb 13 15:11:26.804521 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:11:26.810982 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:11:26.818264 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:42694.service - OpenSSH per-connection server daemon (10.0.0.1:42694). Feb 13 15:11:26.820941 systemd-logind[1454]: Removed session 11. Feb 13 15:11:26.867954 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 42694 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:26.869827 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:26.874734 systemd-logind[1454]: New session 12 of user core. Feb 13 15:11:26.879854 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:11:27.021290 kubelet[2624]: E0213 15:11:27.021159 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:27.022078 kubelet[2624]: E0213 15:11:27.021489 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:27.026324 sshd[5655]: Connection closed by 10.0.0.1 port 42694 Feb 13 15:11:27.025402 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:27.030583 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:42694.service: Deactivated successfully. Feb 13 15:11:27.032667 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:11:27.033319 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:11:27.035196 systemd-logind[1454]: Removed session 12. 
Feb 13 15:11:27.045182 kubelet[2624]: I0213 15:11:27.045127 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:27.047068 kubelet[2624]: E0213 15:11:27.047043 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:27.276781 systemd-networkd[1388]: califeb28fbcdb3: Gained IPv6LL Feb 13 15:11:27.341785 systemd-networkd[1388]: calia18abdef05b: Gained IPv6LL Feb 13 15:11:27.404776 systemd-networkd[1388]: califd048d5a89e: Gained IPv6LL Feb 13 15:11:27.788814 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Feb 13 15:11:27.852588 containerd[1475]: time="2025-02-13T15:11:27.852535200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:27.854831 containerd[1475]: time="2025-02-13T15:11:27.854781295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:11:27.854930 containerd[1475]: time="2025-02-13T15:11:27.854881619Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:27.857743 containerd[1475]: time="2025-02-13T15:11:27.857707299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:27.864312 containerd[1475]: time="2025-02-13T15:11:27.864265577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.489750107s" Feb 13 15:11:27.864312 containerd[1475]: time="2025-02-13T15:11:27.864307938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:11:27.865660 containerd[1475]: time="2025-02-13T15:11:27.865598073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:11:27.867028 containerd[1475]: time="2025-02-13T15:11:27.866994012Z" level=info msg="CreateContainer within sandbox \"0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:11:27.881119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013035619.mount: Deactivated successfully. 
Feb 13 15:11:27.890398 containerd[1475]: time="2025-02-13T15:11:27.890159154Z" level=info msg="CreateContainer within sandbox \"0f7582ea037a85bd171b103f515cbd71e0850bb34a2c9f0c7ae42bb32d593f43\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5722d218fcbd1202e6c90bed52a8e2371dfa7c365f44dc9f62aee8ef9b5034d1\"" Feb 13 15:11:27.890982 containerd[1475]: time="2025-02-13T15:11:27.890783500Z" level=info msg="StartContainer for \"5722d218fcbd1202e6c90bed52a8e2371dfa7c365f44dc9f62aee8ef9b5034d1\"" Feb 13 15:11:27.946867 systemd[1]: Started cri-containerd-5722d218fcbd1202e6c90bed52a8e2371dfa7c365f44dc9f62aee8ef9b5034d1.scope - libcontainer container 5722d218fcbd1202e6c90bed52a8e2371dfa7c365f44dc9f62aee8ef9b5034d1. Feb 13 15:11:27.989815 containerd[1475]: time="2025-02-13T15:11:27.989768575Z" level=info msg="StartContainer for \"5722d218fcbd1202e6c90bed52a8e2371dfa7c365f44dc9f62aee8ef9b5034d1\" returns successfully" Feb 13 15:11:28.025949 kubelet[2624]: E0213 15:11:28.025898 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:11:29.027369 kubelet[2624]: I0213 15:11:29.027324 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:29.482406 containerd[1475]: time="2025-02-13T15:11:29.482361416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:29.483876 containerd[1475]: time="2025-02-13T15:11:29.483831316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:11:29.488177 containerd[1475]: time="2025-02-13T15:11:29.488125569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.622495094s" Feb 13 15:11:29.488177 containerd[1475]: time="2025-02-13T15:11:29.488161290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:11:29.490246 containerd[1475]: time="2025-02-13T15:11:29.490206812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:11:29.491485 containerd[1475]: time="2025-02-13T15:11:29.491374499Z" level=info msg="CreateContainer within sandbox \"c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:11:29.496671 containerd[1475]: time="2025-02-13T15:11:29.493765516Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:29.496671 containerd[1475]: time="2025-02-13T15:11:29.494913682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:29.512714 containerd[1475]: time="2025-02-13T15:11:29.512661397Z" level=info msg="CreateContainer within sandbox \"c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29\" 
for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"95ee723fd36636d6c432ae84268e729a09e886c12bce3b226d3302a57cb59056\"" Feb 13 15:11:29.513886 containerd[1475]: time="2025-02-13T15:11:29.513851805Z" level=info msg="StartContainer for \"95ee723fd36636d6c432ae84268e729a09e886c12bce3b226d3302a57cb59056\"" Feb 13 15:11:29.549830 systemd[1]: Started cri-containerd-95ee723fd36636d6c432ae84268e729a09e886c12bce3b226d3302a57cb59056.scope - libcontainer container 95ee723fd36636d6c432ae84268e729a09e886c12bce3b226d3302a57cb59056. Feb 13 15:11:29.577298 containerd[1475]: time="2025-02-13T15:11:29.577259919Z" level=info msg="StartContainer for \"95ee723fd36636d6c432ae84268e729a09e886c12bce3b226d3302a57cb59056\" returns successfully" Feb 13 15:11:30.028042 containerd[1475]: time="2025-02-13T15:11:30.027987809Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:30.028854 containerd[1475]: time="2025-02-13T15:11:30.028534071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:11:30.031817 containerd[1475]: time="2025-02-13T15:11:30.031783119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 541.542785ms" Feb 13 15:11:30.031817 containerd[1475]: time="2025-02-13T15:11:30.031818760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:11:30.032394 containerd[1475]: time="2025-02-13T15:11:30.032132012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:11:30.033363 containerd[1475]: time="2025-02-13T15:11:30.033317419Z" level=info msg="CreateContainer within sandbox \"d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:11:30.047749 containerd[1475]: time="2025-02-13T15:11:30.047698185Z" level=info msg="CreateContainer within sandbox \"d6e8b82b4a4f6a34d33295756872b19bc5d97dae0786d19329d93187c8a69551\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9e81ec95e05819bbe50bc7ae92786f5505ca9af9f9caa3c9caa50a080079e128\"" Feb 13 15:11:30.048349 containerd[1475]: time="2025-02-13T15:11:30.048313489Z" level=info msg="StartContainer for \"9e81ec95e05819bbe50bc7ae92786f5505ca9af9f9caa3c9caa50a080079e128\"" Feb 13 15:11:30.078850 systemd[1]: Started cri-containerd-9e81ec95e05819bbe50bc7ae92786f5505ca9af9f9caa3c9caa50a080079e128.scope - libcontainer container 9e81ec95e05819bbe50bc7ae92786f5505ca9af9f9caa3c9caa50a080079e128. 
Feb 13 15:11:30.111348 containerd[1475]: time="2025-02-13T15:11:30.111185922Z" level=info msg="StartContainer for \"9e81ec95e05819bbe50bc7ae92786f5505ca9af9f9caa3c9caa50a080079e128\" returns successfully" Feb 13 15:11:31.051518 kubelet[2624]: I0213 15:11:31.051476 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bd4f4757-r8prd" podStartSLOduration=22.560587345 podStartE2EDuration="25.05143346s" podCreationTimestamp="2025-02-13 15:11:06 +0000 UTC" firstStartedPulling="2025-02-13 15:11:25.373899962 +0000 UTC m=+38.757299633" lastFinishedPulling="2025-02-13 15:11:27.864746077 +0000 UTC m=+41.248145748" observedRunningTime="2025-02-13 15:11:28.069843253 +0000 UTC m=+41.453242924" watchObservedRunningTime="2025-02-13 15:11:31.05143346 +0000 UTC m=+44.434833131" Feb 13 15:11:31.053039 kubelet[2624]: I0213 15:11:31.052777 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bd4f4757-bzs76" podStartSLOduration=20.546641429 podStartE2EDuration="25.052743271s" podCreationTimestamp="2025-02-13 15:11:06 +0000 UTC" firstStartedPulling="2025-02-13 15:11:25.525919486 +0000 UTC m=+38.909319157" lastFinishedPulling="2025-02-13 15:11:30.032021328 +0000 UTC m=+43.415420999" observedRunningTime="2025-02-13 15:11:31.050604428 +0000 UTC m=+44.434004139" watchObservedRunningTime="2025-02-13 15:11:31.052743271 +0000 UTC m=+44.436142942" Feb 13 15:11:31.208813 kubelet[2624]: I0213 15:11:31.208641 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:31.970626 containerd[1475]: time="2025-02-13T15:11:31.970567275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:31.971181 containerd[1475]: time="2025-02-13T15:11:31.971141097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:11:31.972014 containerd[1475]: time="2025-02-13T15:11:31.971988250Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:31.982426 containerd[1475]: time="2025-02-13T15:11:31.982380249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:31.983612 containerd[1475]: time="2025-02-13T15:11:31.983169320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.951008906s" Feb 13 15:11:31.983612 containerd[1475]: time="2025-02-13T15:11:31.983202281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:11:31.984106 containerd[1475]: time="2025-02-13T15:11:31.984083035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:11:31.997303 containerd[1475]: 
time="2025-02-13T15:11:31.997016452Z" level=info msg="CreateContainer within sandbox \"2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:11:32.009206 containerd[1475]: time="2025-02-13T15:11:32.009155392Z" level=info msg="CreateContainer within sandbox \"2470232d0ef8466accebb79e9a5ca2c92698eee6b9cb50cef48be9f7e2ef172f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8a151db243ccb34c95091e8b4fa5a2f40a5267927337cecf0f627b7f042a661c\"" Feb 13 15:11:32.011279 containerd[1475]: time="2025-02-13T15:11:32.010289954Z" level=info msg="StartContainer for \"8a151db243ccb34c95091e8b4fa5a2f40a5267927337cecf0f627b7f042a661c\"" Feb 13 15:11:32.040527 systemd[1]: Started cri-containerd-8a151db243ccb34c95091e8b4fa5a2f40a5267927337cecf0f627b7f042a661c.scope - libcontainer container 8a151db243ccb34c95091e8b4fa5a2f40a5267927337cecf0f627b7f042a661c. Feb 13 15:11:32.042161 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:42710.service - OpenSSH per-connection server daemon (10.0.0.1:42710). Feb 13 15:11:32.049977 kubelet[2624]: I0213 15:11:32.049930 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:11:32.103631 containerd[1475]: time="2025-02-13T15:11:32.103586063Z" level=info msg="StartContainer for \"8a151db243ccb34c95091e8b4fa5a2f40a5267927337cecf0f627b7f042a661c\" returns successfully" Feb 13 15:11:32.129612 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 42710 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:32.132095 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:32.138004 systemd-logind[1454]: New session 13 of user core. Feb 13 15:11:32.143836 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:11:32.390616 sshd[5904]: Connection closed by 10.0.0.1 port 42710 Feb 13 15:11:32.392111 sshd-session[5884]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:32.395456 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:42710.service: Deactivated successfully. Feb 13 15:11:32.398434 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:11:32.399201 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:11:32.400100 systemd-logind[1454]: Removed session 13. 
Feb 13 15:11:33.083218 kubelet[2624]: I0213 15:11:33.083179 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c4c875978-b5v57" podStartSLOduration=19.650544431 podStartE2EDuration="26.083136839s" podCreationTimestamp="2025-02-13 15:11:07 +0000 UTC" firstStartedPulling="2025-02-13 15:11:25.550903924 +0000 UTC m=+38.934303555" lastFinishedPulling="2025-02-13 15:11:31.983496332 +0000 UTC m=+45.366895963" observedRunningTime="2025-02-13 15:11:33.082507216 +0000 UTC m=+46.465906927" watchObservedRunningTime="2025-02-13 15:11:33.083136839 +0000 UTC m=+46.466536510" Feb 13 15:11:33.410212 containerd[1475]: time="2025-02-13T15:11:33.410162602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:33.410705 containerd[1475]: time="2025-02-13T15:11:33.410662301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:11:33.411522 containerd[1475]: time="2025-02-13T15:11:33.411482651Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:33.413376 containerd[1475]: time="2025-02-13T15:11:33.413340040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:11:33.414049 containerd[1475]: time="2025-02-13T15:11:33.414005944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.429891348s" Feb 13 15:11:33.414084 containerd[1475]: time="2025-02-13T15:11:33.414050226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:11:33.415790 containerd[1475]: time="2025-02-13T15:11:33.415760889Z" level=info msg="CreateContainer within sandbox \"c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:11:33.427269 containerd[1475]: time="2025-02-13T15:11:33.427220791Z" level=info msg="CreateContainer within sandbox \"c9047e337df481c5a99ba7ed6f1104232dc78891b75467031dd8791ee0bdfc29\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"24f01eb8ebefcbece5d29ac66dbbd86196449a395752f1f95d236ad7b24ed338\"" Feb 13 15:11:33.428126 containerd[1475]: time="2025-02-13T15:11:33.428060022Z" level=info msg="StartContainer for \"24f01eb8ebefcbece5d29ac66dbbd86196449a395752f1f95d236ad7b24ed338\"" Feb 13 15:11:33.452867 systemd[1]: Started cri-containerd-24f01eb8ebefcbece5d29ac66dbbd86196449a395752f1f95d236ad7b24ed338.scope - libcontainer container 24f01eb8ebefcbece5d29ac66dbbd86196449a395752f1f95d236ad7b24ed338. 
Feb 13 15:11:33.492112 containerd[1475]: time="2025-02-13T15:11:33.492059059Z" level=info msg="StartContainer for \"24f01eb8ebefcbece5d29ac66dbbd86196449a395752f1f95d236ad7b24ed338\" returns successfully" Feb 13 15:11:33.799562 kubelet[2624]: I0213 15:11:33.799445 2624 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:11:33.801355 kubelet[2624]: I0213 15:11:33.801331 2624 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:11:34.101489 kubelet[2624]: I0213 15:11:34.101069 2624 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-kwm8r" podStartSLOduration=19.119254801 podStartE2EDuration="27.101019571s" podCreationTimestamp="2025-02-13 15:11:07 +0000 UTC" firstStartedPulling="2025-02-13 15:11:25.432498464 +0000 UTC m=+38.815898095" lastFinishedPulling="2025-02-13 15:11:33.414263194 +0000 UTC m=+46.797662865" observedRunningTime="2025-02-13 15:11:34.100811724 +0000 UTC m=+47.484211395" watchObservedRunningTime="2025-02-13 15:11:34.101019571 +0000 UTC m=+47.484419242" Feb 13 15:11:37.402407 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:33078.service - OpenSSH per-connection server daemon (10.0.0.1:33078). Feb 13 15:11:37.473729 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 33078 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:37.475250 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:37.482497 systemd-logind[1454]: New session 14 of user core. Feb 13 15:11:37.496934 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:11:37.678966 sshd[5998]: Connection closed by 10.0.0.1 port 33078 Feb 13 15:11:37.679544 sshd-session[5996]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:37.694315 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:33078.service: Deactivated successfully. Feb 13 15:11:37.695970 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:11:37.697815 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:11:37.708563 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:33084.service - OpenSSH per-connection server daemon (10.0.0.1:33084). Feb 13 15:11:37.709948 systemd-logind[1454]: Removed session 14. Feb 13 15:11:37.749082 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 33084 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:37.751704 sshd-session[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:37.755941 systemd-logind[1454]: New session 15 of user core. Feb 13 15:11:37.765838 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:11:37.992071 sshd[6013]: Connection closed by 10.0.0.1 port 33084 Feb 13 15:11:37.992861 sshd-session[6011]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:37.999078 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:33084.service: Deactivated successfully. Feb 13 15:11:38.001591 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:11:38.003243 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:11:38.005220 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:33090.service - OpenSSH per-connection server daemon (10.0.0.1:33090). 
Feb 13 15:11:38.006613 systemd-logind[1454]: Removed session 15. Feb 13 15:11:38.053219 sshd[6023]: Accepted publickey for core from 10.0.0.1 port 33090 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:38.055241 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:38.060115 systemd-logind[1454]: New session 16 of user core. Feb 13 15:11:38.066823 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:11:39.714341 sshd[6025]: Connection closed by 10.0.0.1 port 33090 Feb 13 15:11:39.715419 sshd-session[6023]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:39.724799 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:33090.service: Deactivated successfully. Feb 13 15:11:39.727509 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:11:39.729871 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:11:39.734551 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:33102.service - OpenSSH per-connection server daemon (10.0.0.1:33102). Feb 13 15:11:39.735958 systemd-logind[1454]: Removed session 16. Feb 13 15:11:39.804753 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 33102 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:39.806804 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:39.811165 systemd-logind[1454]: New session 17 of user core. Feb 13 15:11:39.819845 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:11:40.182763 sshd[6046]: Connection closed by 10.0.0.1 port 33102 Feb 13 15:11:40.182793 sshd-session[6043]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:40.203379 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:33110.service - OpenSSH per-connection server daemon (10.0.0.1:33110). Feb 13 15:11:40.203904 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:33102.service: Deactivated successfully. Feb 13 15:11:40.205957 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:11:40.207698 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:11:40.208950 systemd-logind[1454]: Removed session 17. Feb 13 15:11:40.243275 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 33110 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:40.244942 sshd-session[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:40.248678 systemd-logind[1454]: New session 18 of user core. Feb 13 15:11:40.257849 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:11:40.404901 sshd[6058]: Connection closed by 10.0.0.1 port 33110 Feb 13 15:11:40.405284 sshd-session[6054]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:40.409352 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:33110.service: Deactivated successfully. Feb 13 15:11:40.411295 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:11:40.412092 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:11:40.413236 systemd-logind[1454]: Removed session 18. Feb 13 15:11:45.416492 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:60536.service - OpenSSH per-connection server daemon (10.0.0.1:60536). 
Feb 13 15:11:45.459012 sshd[6075]: Accepted publickey for core from 10.0.0.1 port 60536 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:45.460548 sshd-session[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:45.465513 systemd-logind[1454]: New session 19 of user core. Feb 13 15:11:45.476858 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:11:45.615168 sshd[6077]: Connection closed by 10.0.0.1 port 60536 Feb 13 15:11:45.615548 sshd-session[6075]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:45.620873 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:11:45.621154 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:60536.service: Deactivated successfully. Feb 13 15:11:45.624748 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:11:45.625537 systemd-logind[1454]: Removed session 19. Feb 13 15:11:46.694549 containerd[1475]: time="2025-02-13T15:11:46.694506444Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:46.695318 containerd[1475]: time="2025-02-13T15:11:46.695200785Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:46.695318 containerd[1475]: time="2025-02-13T15:11:46.695225825Z" level=info msg="StopPodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:46.695877 containerd[1475]: time="2025-02-13T15:11:46.695849604Z" level=info msg="RemovePodSandbox for \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:46.695931 containerd[1475]: time="2025-02-13T15:11:46.695884965Z" level=info msg="Forcibly stopping sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\"" Feb 13 15:11:46.695969 containerd[1475]: time="2025-02-13T15:11:46.695960648Z" level=info msg="TearDown network for sandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" successfully" Feb 13 15:11:46.715082 containerd[1475]: time="2025-02-13T15:11:46.715030423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.715571 containerd[1475]: time="2025-02-13T15:11:46.715113225Z" level=info msg="RemovePodSandbox \"ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c\" returns successfully" Feb 13 15:11:46.715667 containerd[1475]: time="2025-02-13T15:11:46.715629001Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:46.715786 containerd[1475]: time="2025-02-13T15:11:46.715762525Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:46.715786 containerd[1475]: time="2025-02-13T15:11:46.715777845Z" level=info msg="StopPodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:46.716616 containerd[1475]: time="2025-02-13T15:11:46.716593230Z" level=info msg="RemovePodSandbox for \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:46.716616 containerd[1475]: time="2025-02-13T15:11:46.716618231Z" level=info msg="Forcibly stopping sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\"" Feb 13 15:11:46.716957 containerd[1475]: time="2025-02-13T15:11:46.716691353Z" level=info msg="TearDown network for sandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" successfully" Feb 13 15:11:46.722256 containerd[1475]: time="2025-02-13T15:11:46.722203719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.722373 containerd[1475]: time="2025-02-13T15:11:46.722283842Z" level=info msg="RemovePodSandbox \"b2592899f76ea16b7c973506682f64b21808af1d448a210339a71fa0ee198079\" returns successfully" Feb 13 15:11:46.722769 containerd[1475]: time="2025-02-13T15:11:46.722741615Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:46.722863 containerd[1475]: time="2025-02-13T15:11:46.722846339Z" level=info msg="TearDown network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" successfully" Feb 13 15:11:46.722920 containerd[1475]: time="2025-02-13T15:11:46.722863179Z" level=info msg="StopPodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" returns successfully" Feb 13 15:11:46.723427 containerd[1475]: time="2025-02-13T15:11:46.723405196Z" level=info msg="RemovePodSandbox for \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:46.723468 containerd[1475]: time="2025-02-13T15:11:46.723434156Z" level=info msg="Forcibly stopping sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\"" Feb 13 15:11:46.723515 containerd[1475]: time="2025-02-13T15:11:46.723500998Z" level=info msg="TearDown network for sandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" successfully" Feb 13 15:11:46.726032 containerd[1475]: time="2025-02-13T15:11:46.725986153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.726090 containerd[1475]: time="2025-02-13T15:11:46.726050955Z" level=info msg="RemovePodSandbox \"24ee851086f8a33f8be922985b84d36fbff07cfda4bc86a21708febb4afde788\" returns successfully" Feb 13 15:11:46.726466 containerd[1475]: time="2025-02-13T15:11:46.726441727Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" Feb 13 15:11:46.726547 containerd[1475]: time="2025-02-13T15:11:46.726532730Z" level=info msg="TearDown network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" successfully" Feb 13 15:11:46.726572 containerd[1475]: time="2025-02-13T15:11:46.726546650Z" level=info msg="StopPodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" returns successfully" Feb 13 15:11:46.727028 containerd[1475]: time="2025-02-13T15:11:46.726997064Z" level=info msg="RemovePodSandbox for \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" Feb 13 15:11:46.727065 containerd[1475]: time="2025-02-13T15:11:46.727032465Z" level=info msg="Forcibly stopping sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\"" Feb 13 15:11:46.727110 containerd[1475]: time="2025-02-13T15:11:46.727095987Z" level=info msg="TearDown network for sandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" successfully" Feb 13 15:11:46.730159 containerd[1475]: time="2025-02-13T15:11:46.730115718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.730228 containerd[1475]: time="2025-02-13T15:11:46.730180680Z" level=info msg="RemovePodSandbox \"24aea013328915918a1de8a0fed0fab3ecf45619d5b985914e4d6b165126859f\" returns successfully" Feb 13 15:11:46.730533 containerd[1475]: time="2025-02-13T15:11:46.730509050Z" level=info msg="StopPodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\"" Feb 13 15:11:46.730617 containerd[1475]: time="2025-02-13T15:11:46.730603453Z" level=info msg="TearDown network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" successfully" Feb 13 15:11:46.730659 containerd[1475]: time="2025-02-13T15:11:46.730617093Z" level=info msg="StopPodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" returns successfully" Feb 13 15:11:46.731476 containerd[1475]: time="2025-02-13T15:11:46.731453038Z" level=info msg="RemovePodSandbox for \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\"" Feb 13 15:11:46.731530 containerd[1475]: time="2025-02-13T15:11:46.731482879Z" level=info msg="Forcibly stopping sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\"" Feb 13 15:11:46.731560 containerd[1475]: time="2025-02-13T15:11:46.731548001Z" level=info msg="TearDown network for sandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" successfully" Feb 13 15:11:46.735498 containerd[1475]: time="2025-02-13T15:11:46.735430718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.735498 containerd[1475]: time="2025-02-13T15:11:46.735495080Z" level=info msg="RemovePodSandbox \"ea437e7a59f4c57ce21e0fb7e7a43033a4556ffdcf1badcd280f97706d47298e\" returns successfully" Feb 13 15:11:46.735915 containerd[1475]: time="2025-02-13T15:11:46.735888412Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:46.735997 containerd[1475]: time="2025-02-13T15:11:46.735982135Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:46.735997 containerd[1475]: time="2025-02-13T15:11:46.735996135Z" level=info msg="StopPodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:46.736442 containerd[1475]: time="2025-02-13T15:11:46.736416268Z" level=info msg="RemovePodSandbox for \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:46.737692 containerd[1475]: time="2025-02-13T15:11:46.736530391Z" level=info msg="Forcibly stopping sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\"" Feb 13 15:11:46.737692 containerd[1475]: time="2025-02-13T15:11:46.736606794Z" level=info msg="TearDown network for sandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" successfully" Feb 13 15:11:46.741579 containerd[1475]: time="2025-02-13T15:11:46.741538502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.741748 containerd[1475]: time="2025-02-13T15:11:46.741727988Z" level=info msg="RemovePodSandbox \"49bd09c8df3d56fdd76454cba0f7137c5133bcf1c2b57d44652a4a8bf1b4d8ab\" returns successfully" Feb 13 15:11:46.742362 containerd[1475]: time="2025-02-13T15:11:46.742318486Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:46.742439 containerd[1475]: time="2025-02-13T15:11:46.742427449Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" successfully" Feb 13 15:11:46.742469 containerd[1475]: time="2025-02-13T15:11:46.742439850Z" level=info msg="StopPodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:46.742724 containerd[1475]: time="2025-02-13T15:11:46.742700098Z" level=info msg="RemovePodSandbox for \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:46.742752 containerd[1475]: time="2025-02-13T15:11:46.742730298Z" level=info msg="Forcibly stopping sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\"" Feb 13 15:11:46.742805 containerd[1475]: time="2025-02-13T15:11:46.742790900Z" level=info msg="TearDown network for sandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" successfully" Feb 13 15:11:46.747466 containerd[1475]: time="2025-02-13T15:11:46.747419680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.747542 containerd[1475]: time="2025-02-13T15:11:46.747493522Z" level=info msg="RemovePodSandbox \"f6ba4b3e7b7c5217743d89c7d2fedb0f1bb626694ba9abe0e104268d1ae45534\" returns successfully" Feb 13 15:11:46.747917 containerd[1475]: time="2025-02-13T15:11:46.747892374Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:46.752720 containerd[1475]: time="2025-02-13T15:11:46.752670958Z" level=info msg="TearDown network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" successfully" Feb 13 15:11:46.752720 containerd[1475]: time="2025-02-13T15:11:46.752705799Z" level=info msg="StopPodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" returns successfully" Feb 13 15:11:46.753131 containerd[1475]: time="2025-02-13T15:11:46.753105571Z" level=info msg="RemovePodSandbox for \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:46.753191 containerd[1475]: time="2025-02-13T15:11:46.753136452Z" level=info msg="Forcibly stopping sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\"" Feb 13 15:11:46.753221 containerd[1475]: time="2025-02-13T15:11:46.753203494Z" level=info msg="TearDown network for sandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" successfully" Feb 13 15:11:46.757825 containerd[1475]: time="2025-02-13T15:11:46.757752192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.760656 containerd[1475]: time="2025-02-13T15:11:46.757939437Z" level=info msg="RemovePodSandbox \"41054470c5e025b0aea2ea864093a41fc28d0d480b12152120ef1427a2fcec2b\" returns successfully" Feb 13 15:11:46.760738 containerd[1475]: time="2025-02-13T15:11:46.760703721Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" Feb 13 15:11:46.760820 containerd[1475]: time="2025-02-13T15:11:46.760797803Z" level=info msg="TearDown network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" successfully" Feb 13 15:11:46.760820 containerd[1475]: time="2025-02-13T15:11:46.760810524Z" level=info msg="StopPodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" returns successfully" Feb 13 15:11:46.762105 containerd[1475]: time="2025-02-13T15:11:46.762069402Z" level=info msg="RemovePodSandbox for \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" Feb 13 15:11:46.762105 containerd[1475]: time="2025-02-13T15:11:46.762108163Z" level=info msg="Forcibly stopping sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\"" Feb 13 15:11:46.762291 containerd[1475]: time="2025-02-13T15:11:46.762257287Z" level=info msg="TearDown network for sandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" successfully" Feb 13 15:11:46.765200 containerd[1475]: time="2025-02-13T15:11:46.765157535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.765275 containerd[1475]: time="2025-02-13T15:11:46.765218737Z" level=info msg="RemovePodSandbox \"d0dda61a75df105ea392fdd6e6c4bb64a2ff05e456fb9d29f95d774a7c18111b\" returns successfully" Feb 13 15:11:46.765584 containerd[1475]: time="2025-02-13T15:11:46.765485985Z" level=info msg="StopPodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\"" Feb 13 15:11:46.765865 containerd[1475]: time="2025-02-13T15:11:46.765839796Z" level=info msg="TearDown network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" successfully" Feb 13 15:11:46.765865 containerd[1475]: time="2025-02-13T15:11:46.765865196Z" level=info msg="StopPodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" returns successfully" Feb 13 15:11:46.766410 containerd[1475]: time="2025-02-13T15:11:46.766105724Z" level=info msg="RemovePodSandbox for \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\"" Feb 13 15:11:46.766410 containerd[1475]: time="2025-02-13T15:11:46.766135044Z" level=info msg="Forcibly stopping sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\"" Feb 13 15:11:46.766410 containerd[1475]: time="2025-02-13T15:11:46.766245248Z" level=info msg="TearDown network for sandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" successfully" Feb 13 15:11:46.769423 containerd[1475]: time="2025-02-13T15:11:46.769390223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.769711 containerd[1475]: time="2025-02-13T15:11:46.769544267Z" level=info msg="RemovePodSandbox \"d161a9a0eba2748999de33527b214ffd6175ece525b0eb08cc72d9088a69707e\" returns successfully" Feb 13 15:11:46.770666 containerd[1475]: time="2025-02-13T15:11:46.770551618Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:46.771064 containerd[1475]: time="2025-02-13T15:11:46.770742383Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:46.771064 containerd[1475]: time="2025-02-13T15:11:46.771055033Z" level=info msg="StopPodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:46.771675 containerd[1475]: time="2025-02-13T15:11:46.771468645Z" level=info msg="RemovePodSandbox for \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:46.771675 containerd[1475]: time="2025-02-13T15:11:46.771497286Z" level=info msg="Forcibly stopping sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\"" Feb 13 15:11:46.771675 containerd[1475]: time="2025-02-13T15:11:46.771559768Z" level=info msg="TearDown network for sandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" successfully" Feb 13 15:11:46.774868 containerd[1475]: time="2025-02-13T15:11:46.774830867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.774949 containerd[1475]: time="2025-02-13T15:11:46.774900389Z" level=info msg="RemovePodSandbox \"46e10e613f5954cfc5fedfd2b9bbb18cfcd3d4ae2fb4b4189b38be14eb7b4226\" returns successfully" Feb 13 15:11:46.775549 containerd[1475]: time="2025-02-13T15:11:46.775291361Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:46.775549 containerd[1475]: time="2025-02-13T15:11:46.775385203Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:46.775549 containerd[1475]: time="2025-02-13T15:11:46.775404084Z" level=info msg="StopPodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:46.776510 containerd[1475]: time="2025-02-13T15:11:46.776005182Z" level=info msg="RemovePodSandbox for \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:46.776510 containerd[1475]: time="2025-02-13T15:11:46.776030463Z" level=info msg="Forcibly stopping sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\"" Feb 13 15:11:46.776510 containerd[1475]: time="2025-02-13T15:11:46.776095065Z" level=info msg="TearDown network for sandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" successfully" Feb 13 15:11:46.779785 containerd[1475]: time="2025-02-13T15:11:46.779738135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.779867 containerd[1475]: time="2025-02-13T15:11:46.779806017Z" level=info msg="RemovePodSandbox \"cd0b13673a3bb4070e5802026991102c070a2ac1914338a4fdedf59649029ac9\" returns successfully" Feb 13 15:11:46.780274 containerd[1475]: time="2025-02-13T15:11:46.780231110Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:46.780344 containerd[1475]: time="2025-02-13T15:11:46.780325152Z" level=info msg="TearDown network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" successfully" Feb 13 15:11:46.780344 containerd[1475]: time="2025-02-13T15:11:46.780341393Z" level=info msg="StopPodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" returns successfully" Feb 13 15:11:46.780665 containerd[1475]: time="2025-02-13T15:11:46.780634322Z" level=info msg="RemovePodSandbox for \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:46.780704 containerd[1475]: time="2025-02-13T15:11:46.780670283Z" level=info msg="Forcibly stopping sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\"" Feb 13 15:11:46.780765 containerd[1475]: time="2025-02-13T15:11:46.780726725Z" level=info msg="TearDown network for sandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" successfully" Feb 13 15:11:46.782890 containerd[1475]: time="2025-02-13T15:11:46.782855709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.783165 containerd[1475]: time="2025-02-13T15:11:46.782907510Z" level=info msg="RemovePodSandbox \"b6ec1450cc91f3c3fd43d2bf3830e54b581f5d8e7c22d0d8482ea429e59b9487\" returns successfully" Feb 13 15:11:46.783272 containerd[1475]: time="2025-02-13T15:11:46.783231800Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" Feb 13 15:11:46.783364 containerd[1475]: time="2025-02-13T15:11:46.783343804Z" level=info msg="TearDown network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" successfully" Feb 13 15:11:46.783402 containerd[1475]: time="2025-02-13T15:11:46.783361164Z" level=info msg="StopPodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" returns successfully" Feb 13 15:11:46.783731 containerd[1475]: time="2025-02-13T15:11:46.783707695Z" level=info msg="RemovePodSandbox for \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" Feb 13 15:11:46.783834 containerd[1475]: time="2025-02-13T15:11:46.783819258Z" level=info msg="Forcibly stopping sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\"" Feb 13 15:11:46.793913 containerd[1475]: time="2025-02-13T15:11:46.793403067Z" level=info msg="TearDown network for sandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" successfully" Feb 13 15:11:46.800181 containerd[1475]: time="2025-02-13T15:11:46.800141670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.800590 containerd[1475]: time="2025-02-13T15:11:46.800561763Z" level=info msg="RemovePodSandbox \"0904191500da4f18c2a1f832292642f753d32ba19c12318bad911e39b4f1897d\" returns successfully" Feb 13 15:11:46.801315 containerd[1475]: time="2025-02-13T15:11:46.801285465Z" level=info msg="StopPodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\"" Feb 13 15:11:46.801399 containerd[1475]: time="2025-02-13T15:11:46.801380108Z" level=info msg="TearDown network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" successfully" Feb 13 15:11:46.801399 containerd[1475]: time="2025-02-13T15:11:46.801394828Z" level=info msg="StopPodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" returns successfully" Feb 13 15:11:46.802207 containerd[1475]: time="2025-02-13T15:11:46.802108210Z" level=info msg="RemovePodSandbox for \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\"" Feb 13 15:11:46.802266 containerd[1475]: time="2025-02-13T15:11:46.802211373Z" level=info msg="Forcibly stopping sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\"" Feb 13 15:11:46.802689 containerd[1475]: time="2025-02-13T15:11:46.802291055Z" level=info msg="TearDown network for sandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" successfully" Feb 13 15:11:46.806198 containerd[1475]: time="2025-02-13T15:11:46.806148851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.806270 containerd[1475]: time="2025-02-13T15:11:46.806209413Z" level=info msg="RemovePodSandbox \"71c8c5802838b0e1436652ced0d3c4ceb8d81c8433457d9f3865f1f82e5586f8\" returns successfully" Feb 13 15:11:46.806767 containerd[1475]: time="2025-02-13T15:11:46.806573224Z" level=info msg="StopPodSandbox for \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\"" Feb 13 15:11:46.806767 containerd[1475]: time="2025-02-13T15:11:46.806698548Z" level=info msg="TearDown network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\" successfully" Feb 13 15:11:46.806767 containerd[1475]: time="2025-02-13T15:11:46.806712388Z" level=info msg="StopPodSandbox for \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\" returns successfully" Feb 13 15:11:46.807008 containerd[1475]: time="2025-02-13T15:11:46.806975676Z" level=info msg="RemovePodSandbox for \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\"" Feb 13 15:11:46.808038 containerd[1475]: time="2025-02-13T15:11:46.807082080Z" level=info msg="Forcibly stopping sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\"" Feb 13 15:11:46.808038 containerd[1475]: time="2025-02-13T15:11:46.807143401Z" level=info msg="TearDown network for sandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\" successfully" Feb 13 15:11:46.809928 containerd[1475]: time="2025-02-13T15:11:46.809895644Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.810092 containerd[1475]: time="2025-02-13T15:11:46.810071690Z" level=info msg="RemovePodSandbox \"b068403b6d2423f8f8fca3fae3972942267f92fbd0bf66417a8034cab1e8f4cc\" returns successfully" Feb 13 15:11:46.810447 containerd[1475]: time="2025-02-13T15:11:46.810425820Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:46.810641 containerd[1475]: time="2025-02-13T15:11:46.810623426Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:46.810736 containerd[1475]: time="2025-02-13T15:11:46.810719549Z" level=info msg="StopPodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:46.811148 containerd[1475]: time="2025-02-13T15:11:46.811118321Z" level=info msg="RemovePodSandbox for \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:46.811193 containerd[1475]: time="2025-02-13T15:11:46.811152522Z" level=info msg="Forcibly stopping sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\"" Feb 13 15:11:46.811257 containerd[1475]: time="2025-02-13T15:11:46.811234885Z" level=info msg="TearDown network for sandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" successfully" Feb 13 15:11:46.813980 containerd[1475]: time="2025-02-13T15:11:46.813940486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.814046 containerd[1475]: time="2025-02-13T15:11:46.813995248Z" level=info msg="RemovePodSandbox \"273ccde7a85f474e4ccc02ca048d22424a909ddb8f4d227007a9341ae8f3c8d5\" returns successfully" Feb 13 15:11:46.814496 containerd[1475]: time="2025-02-13T15:11:46.814344059Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:46.814496 containerd[1475]: time="2025-02-13T15:11:46.814435941Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:46.814496 containerd[1475]: time="2025-02-13T15:11:46.814447062Z" level=info msg="StopPodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:46.815017 containerd[1475]: time="2025-02-13T15:11:46.814978518Z" level=info msg="RemovePodSandbox for \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:46.815017 containerd[1475]: time="2025-02-13T15:11:46.815011199Z" level=info msg="Forcibly stopping sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\"" Feb 13 15:11:46.815091 containerd[1475]: time="2025-02-13T15:11:46.815076841Z" level=info msg="TearDown network for sandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" successfully" Feb 13 15:11:46.817676 containerd[1475]: time="2025-02-13T15:11:46.817627718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.817876 containerd[1475]: time="2025-02-13T15:11:46.817784202Z" level=info msg="RemovePodSandbox \"e1d57034f003e2dfab4f7bed930e0ffacf4eef7bb769062bfbebd4e8ed2f48c4\" returns successfully" Feb 13 15:11:46.818335 containerd[1475]: time="2025-02-13T15:11:46.818174414Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:46.818335 containerd[1475]: time="2025-02-13T15:11:46.818272217Z" level=info msg="TearDown network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" successfully" Feb 13 15:11:46.818335 containerd[1475]: time="2025-02-13T15:11:46.818282457Z" level=info msg="StopPodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" returns successfully" Feb 13 15:11:46.818563 containerd[1475]: time="2025-02-13T15:11:46.818539345Z" level=info msg="RemovePodSandbox for \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:46.818805 containerd[1475]: time="2025-02-13T15:11:46.818669749Z" level=info msg="Forcibly stopping sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\"" Feb 13 15:11:46.818805 containerd[1475]: time="2025-02-13T15:11:46.818759832Z" level=info msg="TearDown network for sandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" successfully" Feb 13 15:11:46.821320 containerd[1475]: time="2025-02-13T15:11:46.821284668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.821507 containerd[1475]: time="2025-02-13T15:11:46.821420512Z" level=info msg="RemovePodSandbox \"1d6b0087e0f4ef3ec2a8dc2aa11db9715946f3671b3788fae775cff712b661c2\" returns successfully" Feb 13 15:11:46.821787 containerd[1475]: time="2025-02-13T15:11:46.821759362Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" Feb 13 15:11:46.821870 containerd[1475]: time="2025-02-13T15:11:46.821854525Z" level=info msg="TearDown network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" successfully" Feb 13 15:11:46.821907 containerd[1475]: time="2025-02-13T15:11:46.821868046Z" level=info msg="StopPodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" returns successfully" Feb 13 15:11:46.822691 containerd[1475]: time="2025-02-13T15:11:46.822088612Z" level=info msg="RemovePodSandbox for \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" Feb 13 15:11:46.822691 containerd[1475]: time="2025-02-13T15:11:46.822115533Z" level=info msg="Forcibly stopping sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\"" Feb 13 15:11:46.822691 containerd[1475]: time="2025-02-13T15:11:46.822178375Z" level=info msg="TearDown network for sandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" successfully" Feb 13 15:11:46.828972 containerd[1475]: time="2025-02-13T15:11:46.828928059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.829017 containerd[1475]: time="2025-02-13T15:11:46.828993221Z" level=info msg="RemovePodSandbox \"9c029f215e52f16cb128c8ac90cdde8ab0f51ce40d2e9e487cebc5c6ac40fdce\" returns successfully" Feb 13 15:11:46.829547 containerd[1475]: time="2025-02-13T15:11:46.829486755Z" level=info msg="StopPodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\"" Feb 13 15:11:46.829611 containerd[1475]: time="2025-02-13T15:11:46.829601279Z" level=info msg="TearDown network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" successfully" Feb 13 15:11:46.829634 containerd[1475]: time="2025-02-13T15:11:46.829612519Z" level=info msg="StopPodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" returns successfully" Feb 13 15:11:46.829964 containerd[1475]: time="2025-02-13T15:11:46.829925969Z" level=info msg="RemovePodSandbox for \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\"" Feb 13 15:11:46.829998 containerd[1475]: time="2025-02-13T15:11:46.829962850Z" level=info msg="Forcibly stopping sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\"" Feb 13 15:11:46.830041 containerd[1475]: time="2025-02-13T15:11:46.830027012Z" level=info msg="TearDown network for sandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" successfully" Feb 13 15:11:46.832431 containerd[1475]: time="2025-02-13T15:11:46.832393803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.832468 containerd[1475]: time="2025-02-13T15:11:46.832448125Z" level=info msg="RemovePodSandbox \"70ea2d04ebe307ae7ada237ab9c617dee10c325f07a679b5c71127a221a88485\" returns successfully" Feb 13 15:11:46.832822 containerd[1475]: time="2025-02-13T15:11:46.832790455Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:46.832912 containerd[1475]: time="2025-02-13T15:11:46.832889018Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:46.832912 containerd[1475]: time="2025-02-13T15:11:46.832907019Z" level=info msg="StopPodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:46.833360 containerd[1475]: time="2025-02-13T15:11:46.833314631Z" level=info msg="RemovePodSandbox for \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:46.833425 containerd[1475]: time="2025-02-13T15:11:46.833347752Z" level=info msg="Forcibly stopping sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\"" Feb 13 15:11:46.833488 containerd[1475]: time="2025-02-13T15:11:46.833469996Z" level=info msg="TearDown network for sandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" successfully" Feb 13 15:11:46.838179 containerd[1475]: time="2025-02-13T15:11:46.837677403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.838179 containerd[1475]: time="2025-02-13T15:11:46.837748565Z" level=info msg="RemovePodSandbox \"94fb4183cfa9b90a5cc34ac536012cfeb3bef3ff65d914155cb187ecd2128578\" returns successfully" Feb 13 15:11:46.839114 containerd[1475]: time="2025-02-13T15:11:46.838358383Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:46.839293 containerd[1475]: time="2025-02-13T15:11:46.839249730Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:46.839293 containerd[1475]: time="2025-02-13T15:11:46.839282651Z" level=info msg="StopPodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:46.839679 containerd[1475]: time="2025-02-13T15:11:46.839630901Z" level=info msg="RemovePodSandbox for \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:46.839709 containerd[1475]: time="2025-02-13T15:11:46.839677583Z" level=info msg="Forcibly stopping sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\"" Feb 13 15:11:46.839765 containerd[1475]: time="2025-02-13T15:11:46.839750105Z" level=info msg="TearDown network for sandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" successfully" Feb 13 15:11:46.842107 containerd[1475]: time="2025-02-13T15:11:46.842061135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.842136 containerd[1475]: time="2025-02-13T15:11:46.842122697Z" level=info msg="RemovePodSandbox \"f80188015fe2e23a67ff9153aad484036c4bbd38a1980159c36ba5cfe19c7a6d\" returns successfully" Feb 13 15:11:46.842544 containerd[1475]: time="2025-02-13T15:11:46.842519309Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:46.842628 containerd[1475]: time="2025-02-13T15:11:46.842609431Z" level=info msg="TearDown network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" successfully" Feb 13 15:11:46.842628 containerd[1475]: time="2025-02-13T15:11:46.842624992Z" level=info msg="StopPodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" returns successfully" Feb 13 15:11:46.842937 containerd[1475]: time="2025-02-13T15:11:46.842900240Z" level=info msg="RemovePodSandbox for \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:46.842937 containerd[1475]: time="2025-02-13T15:11:46.842926721Z" level=info msg="Forcibly stopping sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\"" Feb 13 15:11:46.843036 containerd[1475]: time="2025-02-13T15:11:46.842983923Z" level=info msg="TearDown network for sandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" successfully" Feb 13 15:11:46.845348 containerd[1475]: time="2025-02-13T15:11:46.845316913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.845414 containerd[1475]: time="2025-02-13T15:11:46.845375955Z" level=info msg="RemovePodSandbox \"222d9596a316da58fb29f8544caefe48cdba993c765a41b28d1783b263cf6757\" returns successfully" Feb 13 15:11:46.845723 containerd[1475]: time="2025-02-13T15:11:46.845695004Z" level=info msg="StopPodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" Feb 13 15:11:46.845798 containerd[1475]: time="2025-02-13T15:11:46.845783527Z" level=info msg="TearDown network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" successfully" Feb 13 15:11:46.845798 containerd[1475]: time="2025-02-13T15:11:46.845795527Z" level=info msg="StopPodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" returns successfully" Feb 13 15:11:46.846184 containerd[1475]: time="2025-02-13T15:11:46.846134538Z" level=info msg="RemovePodSandbox for \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" Feb 13 15:11:46.846184 containerd[1475]: time="2025-02-13T15:11:46.846160058Z" level=info msg="Forcibly stopping sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\"" Feb 13 15:11:46.846283 containerd[1475]: time="2025-02-13T15:11:46.846223500Z" level=info msg="TearDown network for sandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" successfully" Feb 13 15:11:46.848420 containerd[1475]: time="2025-02-13T15:11:46.848376885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.848482 containerd[1475]: time="2025-02-13T15:11:46.848431887Z" level=info msg="RemovePodSandbox \"02620e8c1ba7eba913ac5536eb3f1e9f349da905ba9160bcddbeed48c0980408\" returns successfully" Feb 13 15:11:46.848750 containerd[1475]: time="2025-02-13T15:11:46.848722496Z" level=info msg="StopPodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\"" Feb 13 15:11:46.848817 containerd[1475]: time="2025-02-13T15:11:46.848802378Z" level=info msg="TearDown network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" successfully" Feb 13 15:11:46.848842 containerd[1475]: time="2025-02-13T15:11:46.848816779Z" level=info msg="StopPodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" returns successfully" Feb 13 15:11:46.849088 containerd[1475]: time="2025-02-13T15:11:46.849066946Z" level=info msg="RemovePodSandbox for \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\"" Feb 13 15:11:46.849088 containerd[1475]: time="2025-02-13T15:11:46.849088267Z" level=info msg="Forcibly stopping sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\"" Feb 13 15:11:46.849149 containerd[1475]: time="2025-02-13T15:11:46.849139068Z" level=info msg="TearDown network for sandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" successfully" Feb 13 15:11:46.851758 containerd[1475]: time="2025-02-13T15:11:46.851720346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.851800 containerd[1475]: time="2025-02-13T15:11:46.851774348Z" level=info msg="RemovePodSandbox \"a5af405ee3327376097dbec9f84d4d554110cba52b2f4ac19ea3cbb8af460adb\" returns successfully" Feb 13 15:11:46.852079 containerd[1475]: time="2025-02-13T15:11:46.852054676Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:46.852144 containerd[1475]: time="2025-02-13T15:11:46.852129518Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:46.852144 containerd[1475]: time="2025-02-13T15:11:46.852141839Z" level=info msg="StopPodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:46.852429 containerd[1475]: time="2025-02-13T15:11:46.852387886Z" level=info msg="RemovePodSandbox for \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:46.852429 containerd[1475]: time="2025-02-13T15:11:46.852409807Z" level=info msg="Forcibly stopping sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\"" Feb 13 15:11:46.852507 containerd[1475]: time="2025-02-13T15:11:46.852462089Z" level=info msg="TearDown network for sandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" successfully" Feb 13 15:11:46.854635 containerd[1475]: time="2025-02-13T15:11:46.854590993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.854789 containerd[1475]: time="2025-02-13T15:11:46.854656315Z" level=info msg="RemovePodSandbox \"b6fdb54887c28d56b0ca441b64189e0e8374edbe9f411cbca188a067068faa6b\" returns successfully" Feb 13 15:11:46.855086 containerd[1475]: time="2025-02-13T15:11:46.855051127Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:46.855146 containerd[1475]: time="2025-02-13T15:11:46.855134329Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:46.855146 containerd[1475]: time="2025-02-13T15:11:46.855144049Z" level=info msg="StopPodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:46.855474 containerd[1475]: time="2025-02-13T15:11:46.855431978Z" level=info msg="RemovePodSandbox for \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:46.855474 containerd[1475]: time="2025-02-13T15:11:46.855452699Z" level=info msg="Forcibly stopping sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\"" Feb 13 15:11:46.855551 containerd[1475]: time="2025-02-13T15:11:46.855501740Z" level=info msg="TearDown network for sandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" successfully" Feb 13 15:11:46.858077 containerd[1475]: time="2025-02-13T15:11:46.858036857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.858124 containerd[1475]: time="2025-02-13T15:11:46.858092498Z" level=info msg="RemovePodSandbox \"884aaca10aad4972eca85003b5cabef7954e0a167bccd628bcaba55a563aab72\" returns successfully" Feb 13 15:11:46.858449 containerd[1475]: time="2025-02-13T15:11:46.858422548Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:46.858805 containerd[1475]: time="2025-02-13T15:11:46.858721677Z" level=info msg="TearDown network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" successfully" Feb 13 15:11:46.858805 containerd[1475]: time="2025-02-13T15:11:46.858738998Z" level=info msg="StopPodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" returns successfully" Feb 13 15:11:46.859116 containerd[1475]: time="2025-02-13T15:11:46.859083168Z" level=info msg="RemovePodSandbox for \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:46.859116 containerd[1475]: time="2025-02-13T15:11:46.859111209Z" level=info msg="Forcibly stopping sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\"" Feb 13 15:11:46.859235 containerd[1475]: time="2025-02-13T15:11:46.859199692Z" level=info msg="TearDown network for sandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" successfully" Feb 13 15:11:46.861336 containerd[1475]: time="2025-02-13T15:11:46.861305755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.861396 containerd[1475]: time="2025-02-13T15:11:46.861363677Z" level=info msg="RemovePodSandbox \"08b2235011e90a89f487fd73cefbdcdac875a1ecc450c9394d81ce9c62c31f2b\" returns successfully" Feb 13 15:11:46.861807 containerd[1475]: time="2025-02-13T15:11:46.861665286Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" Feb 13 15:11:46.861807 containerd[1475]: time="2025-02-13T15:11:46.861747569Z" level=info msg="TearDown network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" successfully" Feb 13 15:11:46.861807 containerd[1475]: time="2025-02-13T15:11:46.861758969Z" level=info msg="StopPodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" returns successfully" Feb 13 15:11:46.863283 containerd[1475]: time="2025-02-13T15:11:46.862192622Z" level=info msg="RemovePodSandbox for \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" Feb 13 15:11:46.863283 containerd[1475]: time="2025-02-13T15:11:46.862230063Z" level=info msg="Forcibly stopping sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\"" Feb 13 15:11:46.863283 containerd[1475]: time="2025-02-13T15:11:46.862300385Z" level=info msg="TearDown network for sandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" successfully" Feb 13 15:11:46.864710 containerd[1475]: time="2025-02-13T15:11:46.864679937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:46.864816 containerd[1475]: time="2025-02-13T15:11:46.864801941Z" level=info msg="RemovePodSandbox \"a4aa1cb676bb764cd97d26e7a5a23d3f91ecd93b77095fdc48f622d152ec57eb\" returns successfully" Feb 13 15:11:46.865197 containerd[1475]: time="2025-02-13T15:11:46.865174912Z" level=info msg="StopPodSandbox for \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\"" Feb 13 15:11:46.865280 containerd[1475]: time="2025-02-13T15:11:46.865264395Z" level=info msg="TearDown network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" successfully" Feb 13 15:11:46.865280 containerd[1475]: time="2025-02-13T15:11:46.865278035Z" level=info msg="StopPodSandbox for \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" returns successfully" Feb 13 15:11:46.865599 containerd[1475]: time="2025-02-13T15:11:46.865569444Z" level=info msg="RemovePodSandbox for \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\"" Feb 13 15:11:46.865665 containerd[1475]: time="2025-02-13T15:11:46.865600965Z" level=info msg="Forcibly stopping sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\"" Feb 13 15:11:46.865691 containerd[1475]: time="2025-02-13T15:11:46.865681727Z" level=info msg="TearDown network for sandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" successfully" Feb 13 15:11:46.868056 containerd[1475]: time="2025-02-13T15:11:46.868017278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:11:46.868110 containerd[1475]: time="2025-02-13T15:11:46.868069199Z" level=info msg="RemovePodSandbox \"f53ba43035eb500a868d22bf46c50f7e8256e1cac07704c783fb9db9adfcd3d2\" returns successfully" Feb 13 15:11:50.626131 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538). Feb 13 15:11:50.666897 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:50.668113 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:50.671931 systemd-logind[1454]: New session 20 of user core. Feb 13 15:11:50.680819 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:11:50.801704 sshd[6121]: Connection closed by 10.0.0.1 port 60538 Feb 13 15:11:50.802209 sshd-session[6119]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:50.805691 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:60538.service: Deactivated successfully. Feb 13 15:11:50.807505 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:11:50.810486 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:11:50.811614 systemd-logind[1454]: Removed session 20. Feb 13 15:11:55.812781 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:57618.service - OpenSSH per-connection server daemon (10.0.0.1:57618). Feb 13 15:11:55.863104 sshd[6138]: Accepted publickey for core from 10.0.0.1 port 57618 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:11:55.864483 sshd-session[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:55.868126 systemd-logind[1454]: New session 21 of user core. Feb 13 15:11:55.874837 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:11:56.006689 sshd[6140]: Connection closed by 10.0.0.1 port 57618 Feb 13 15:11:56.007266 sshd-session[6138]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:56.011064 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:57618.service: Deactivated successfully. Feb 13 15:11:56.013467 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:11:56.014261 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:11:56.015159 systemd-logind[1454]: Removed session 21. Feb 13 15:11:57.110732 kubelet[2624]: E0213 15:11:57.109939 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
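Note on the containerd block above: the long run of StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries is consistent with kubelet's periodic garbage collection of pod sandboxes that have already been torn down. Each sandbox is stopped (a no-op when it is already stopped) and then removed; the "an error occurred when try to find sandbox: not found" warning only means containerd could not attach a status to the removal event for a sandbox that is already gone, and every removal still "returns successfully". As a minimal sketch of what those two CRI calls look like when issued directly (assumptions: Go with the k8s.io/cri-api and google.golang.org/grpc modules, and containerd's CRI socket at /run/containerd/containerd.sock; none of this tooling appears in the log itself):

// sandbox_cleanup.go - hypothetical sketch: stop and remove one pod sandbox over
// the CRI API, mirroring the StopPodSandbox/RemovePodSandbox calls seen above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed CRI endpoint for containerd; adjust if the runtime differs.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox ID copied from the first containerd entry above, purely as an illustration.
	const sandboxID = "ea1a61807a92807611c0a556543198f2f07c05b3f0f212f6a340599fcb12117c"

	// StopPodSandbox tears down the sandbox's network and stops its containers;
	// it is idempotent, so calling it on an already-stopped sandbox still succeeds.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}

	// RemovePodSandbox deletes the sandbox record itself, which is why the later
	// status lookup in the log reports "not found" yet the removal returns success.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("RemovePodSandbox: %v", err)
	}
	log.Printf("sandbox %s stopped and removed", sandboxID)
}

In practice crictl wraps the same RuntimeService calls (crictl stopp / crictl rmp), which is usually the more convenient way to inspect or clean up sandboxes on a node.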
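Note on the final kubelet entry: "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" means the resolv.conf kubelet reads lists more nameservers than it will propagate into pods. Kubelet applies at most three (matching the classic glibc resolver limit) and omits the rest, which is exactly what this warning reports. Assuming the node's resolver configuration is what needs trimming, the usual remedy is to reduce the file named by kubelet's --resolv-conf setting (/etc/resolv.conf by default; often /run/systemd/resolve/resolv.conf on systemd-resolved hosts such as Flatcar) to no more than three nameserver lines, for example:

nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8

with any additional upstream resolvers reached through a local forwarder rather than extra nameserver entries.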