Mar 19 11:32:18.894504 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 19 11:32:18.894525 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025 Mar 19 11:32:18.894535 kernel: KASLR enabled Mar 19 11:32:18.894540 kernel: efi: EFI v2.7 by EDK II Mar 19 11:32:18.894546 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Mar 19 11:32:18.894551 kernel: random: crng init done Mar 19 11:32:18.894558 kernel: secureboot: Secure boot disabled Mar 19 11:32:18.894563 kernel: ACPI: Early table checksum verification disabled Mar 19 11:32:18.894569 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Mar 19 11:32:18.894577 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 19 11:32:18.894583 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894589 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894594 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894600 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894607 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894614 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894620 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894626 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894632 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:32:18.894638 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 19 11:32:18.894644 kernel: NUMA: Failed to initialise from firmware Mar 19 11:32:18.894650 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 19 11:32:18.894656 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff] Mar 19 11:32:18.894662 kernel: Zone ranges: Mar 19 11:32:18.894668 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 19 11:32:18.894675 kernel: DMA32 empty Mar 19 11:32:18.894681 kernel: Normal empty Mar 19 11:32:18.894687 kernel: Movable zone start for each node Mar 19 11:32:18.894704 kernel: Early memory node ranges Mar 19 11:32:18.894725 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Mar 19 11:32:18.894732 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Mar 19 11:32:18.894738 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Mar 19 11:32:18.894744 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Mar 19 11:32:18.894750 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Mar 19 11:32:18.894756 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Mar 19 11:32:18.894762 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Mar 19 11:32:18.894768 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Mar 19 11:32:18.894776 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Mar 19 11:32:18.894782 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 19 11:32:18.894788 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 19 11:32:18.894797 kernel: psci: 
probing for conduit method from ACPI. Mar 19 11:32:18.894803 kernel: psci: PSCIv1.1 detected in firmware. Mar 19 11:32:18.894810 kernel: psci: Using standard PSCI v0.2 function IDs Mar 19 11:32:18.894817 kernel: psci: Trusted OS migration not required Mar 19 11:32:18.894824 kernel: psci: SMC Calling Convention v1.1 Mar 19 11:32:18.894830 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 19 11:32:18.894837 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 19 11:32:18.894843 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 19 11:32:18.894850 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 19 11:32:18.894856 kernel: Detected PIPT I-cache on CPU0 Mar 19 11:32:18.894862 kernel: CPU features: detected: GIC system register CPU interface Mar 19 11:32:18.894869 kernel: CPU features: detected: Hardware dirty bit management Mar 19 11:32:18.894875 kernel: CPU features: detected: Spectre-v4 Mar 19 11:32:18.894883 kernel: CPU features: detected: Spectre-BHB Mar 19 11:32:18.894889 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 19 11:32:18.894896 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 19 11:32:18.894902 kernel: CPU features: detected: ARM erratum 1418040 Mar 19 11:32:18.894908 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 19 11:32:18.894915 kernel: alternatives: applying boot alternatives Mar 19 11:32:18.894922 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb Mar 19 11:32:18.894929 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 19 11:32:18.894935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 19 11:32:18.894942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 19 11:32:18.894948 kernel: Fallback order for Node 0: 0 Mar 19 11:32:18.894956 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 19 11:32:18.894963 kernel: Policy zone: DMA Mar 19 11:32:18.894969 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 19 11:32:18.894975 kernel: software IO TLB: area num 4. Mar 19 11:32:18.894982 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Mar 19 11:32:18.894989 kernel: Memory: 2387544K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184744K reserved, 0K cma-reserved) Mar 19 11:32:18.894998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 19 11:32:18.895006 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 19 11:32:18.895012 kernel: rcu: RCU event tracing is enabled. Mar 19 11:32:18.895019 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 19 11:32:18.895026 kernel: Trampoline variant of Tasks RCU enabled. Mar 19 11:32:18.895032 kernel: Tracing variant of Tasks RCU enabled. Mar 19 11:32:18.895040 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 19 11:32:18.895047 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 19 11:32:18.895053 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 19 11:32:18.895060 kernel: GICv3: 256 SPIs implemented Mar 19 11:32:18.895066 kernel: GICv3: 0 Extended SPIs implemented Mar 19 11:32:18.895073 kernel: Root IRQ handler: gic_handle_irq Mar 19 11:32:18.895079 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 19 11:32:18.895085 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 19 11:32:18.895092 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 19 11:32:18.895098 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Mar 19 11:32:18.895105 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Mar 19 11:32:18.895113 kernel: GICv3: using LPI property table @0x00000000400f0000 Mar 19 11:32:18.895120 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Mar 19 11:32:18.895126 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 19 11:32:18.895133 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 19 11:32:18.895140 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 19 11:32:18.895146 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 19 11:32:18.895153 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 19 11:32:18.895159 kernel: arm-pv: using stolen time PV Mar 19 11:32:18.895166 kernel: Console: colour dummy device 80x25 Mar 19 11:32:18.895173 kernel: ACPI: Core revision 20230628 Mar 19 11:32:18.895180 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 19 11:32:18.895188 kernel: pid_max: default: 32768 minimum: 301 Mar 19 11:32:18.895194 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 19 11:32:18.895201 kernel: landlock: Up and running. Mar 19 11:32:18.895213 kernel: SELinux: Initializing. Mar 19 11:32:18.895220 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 19 11:32:18.895227 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 19 11:32:18.895233 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 19 11:32:18.895240 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 19 11:32:18.895247 kernel: rcu: Hierarchical SRCU implementation. Mar 19 11:32:18.895255 kernel: rcu: Max phase no-delay instances is 400. Mar 19 11:32:18.895261 kernel: Platform MSI: ITS@0x8080000 domain created Mar 19 11:32:18.895268 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 19 11:32:18.895274 kernel: Remapping and enabling EFI services. Mar 19 11:32:18.895281 kernel: smp: Bringing up secondary CPUs ... 
Mar 19 11:32:18.895287 kernel: Detected PIPT I-cache on CPU1 Mar 19 11:32:18.895294 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 19 11:32:18.895301 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Mar 19 11:32:18.895307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 19 11:32:18.895315 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 19 11:32:18.895322 kernel: Detected PIPT I-cache on CPU2 Mar 19 11:32:18.895334 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 19 11:32:18.895342 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Mar 19 11:32:18.895349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 19 11:32:18.895356 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 19 11:32:18.895363 kernel: Detected PIPT I-cache on CPU3 Mar 19 11:32:18.895370 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 19 11:32:18.895377 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Mar 19 11:32:18.895385 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 19 11:32:18.895392 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 19 11:32:18.895399 kernel: smp: Brought up 1 node, 4 CPUs Mar 19 11:32:18.895406 kernel: SMP: Total of 4 processors activated. Mar 19 11:32:18.895413 kernel: CPU features: detected: 32-bit EL0 Support Mar 19 11:32:18.895420 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 19 11:32:18.895427 kernel: CPU features: detected: Common not Private translations Mar 19 11:32:18.895434 kernel: CPU features: detected: CRC32 instructions Mar 19 11:32:18.895442 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 19 11:32:18.895449 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 19 11:32:18.895456 kernel: CPU features: detected: LSE atomic instructions Mar 19 11:32:18.895463 kernel: CPU features: detected: Privileged Access Never Mar 19 11:32:18.895470 kernel: CPU features: detected: RAS Extension Support Mar 19 11:32:18.895477 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 19 11:32:18.895483 kernel: CPU: All CPU(s) started at EL1 Mar 19 11:32:18.895490 kernel: alternatives: applying system-wide alternatives Mar 19 11:32:18.895497 kernel: devtmpfs: initialized Mar 19 11:32:18.895504 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 19 11:32:18.895512 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 19 11:32:18.895519 kernel: pinctrl core: initialized pinctrl subsystem Mar 19 11:32:18.895526 kernel: SMBIOS 3.0.0 present. 
Mar 19 11:32:18.895533 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Mar 19 11:32:18.895540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 19 11:32:18.895547 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 19 11:32:18.895554 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 19 11:32:18.895561 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 19 11:32:18.895568 kernel: audit: initializing netlink subsys (disabled) Mar 19 11:32:18.895577 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Mar 19 11:32:18.895584 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 19 11:32:18.895591 kernel: cpuidle: using governor menu Mar 19 11:32:18.895598 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 19 11:32:18.895605 kernel: ASID allocator initialised with 32768 entries Mar 19 11:32:18.895611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 19 11:32:18.895618 kernel: Serial: AMBA PL011 UART driver Mar 19 11:32:18.895625 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 19 11:32:18.895632 kernel: Modules: 0 pages in range for non-PLT usage Mar 19 11:32:18.895641 kernel: Modules: 509280 pages in range for PLT usage Mar 19 11:32:18.895647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 19 11:32:18.895654 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 19 11:32:18.895661 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 19 11:32:18.895668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 19 11:32:18.895675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 19 11:32:18.895682 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 19 11:32:18.895689 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 19 11:32:18.895741 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 19 11:32:18.895749 kernel: ACPI: Added _OSI(Module Device) Mar 19 11:32:18.895756 kernel: ACPI: Added _OSI(Processor Device) Mar 19 11:32:18.895764 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 19 11:32:18.895770 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 19 11:32:18.895777 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 19 11:32:18.895784 kernel: ACPI: Interpreter enabled Mar 19 11:32:18.895791 kernel: ACPI: Using GIC for interrupt routing Mar 19 11:32:18.895798 kernel: ACPI: MCFG table detected, 1 entries Mar 19 11:32:18.895805 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 19 11:32:18.895815 kernel: printk: console [ttyAMA0] enabled Mar 19 11:32:18.895822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 19 11:32:18.895956 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 19 11:32:18.896030 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 19 11:32:18.896097 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 19 11:32:18.896161 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 19 11:32:18.896231 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 19 11:32:18.896243 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 19 11:32:18.896250 
kernel: PCI host bridge to bus 0000:00 Mar 19 11:32:18.896324 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 19 11:32:18.896384 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 19 11:32:18.896443 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 19 11:32:18.896504 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 19 11:32:18.896583 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 19 11:32:18.896667 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 19 11:32:18.896764 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 19 11:32:18.896834 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 19 11:32:18.896901 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 19 11:32:18.896965 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 19 11:32:18.897030 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 19 11:32:18.897095 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 19 11:32:18.897158 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 19 11:32:18.897225 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 19 11:32:18.897285 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 19 11:32:18.897294 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 19 11:32:18.897301 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 19 11:32:18.897308 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 19 11:32:18.897315 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 19 11:32:18.897325 kernel: iommu: Default domain type: Translated Mar 19 11:32:18.897332 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 19 11:32:18.897339 kernel: efivars: Registered efivars operations Mar 19 11:32:18.897346 kernel: vgaarb: loaded Mar 19 11:32:18.897353 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 19 11:32:18.897360 kernel: VFS: Disk quotas dquot_6.6.0 Mar 19 11:32:18.897367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 19 11:32:18.897374 kernel: pnp: PnP ACPI init Mar 19 11:32:18.897446 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 19 11:32:18.897458 kernel: pnp: PnP ACPI: found 1 devices Mar 19 11:32:18.897465 kernel: NET: Registered PF_INET protocol family Mar 19 11:32:18.897472 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 19 11:32:18.897479 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 19 11:32:18.897486 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 19 11:32:18.897493 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 19 11:32:18.897500 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 19 11:32:18.897507 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 19 11:32:18.897514 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 19 11:32:18.897523 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 19 11:32:18.897530 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 19 11:32:18.897537 kernel: PCI: CLS 0 bytes, default 64 Mar 19 11:32:18.897544 kernel: kvm [1]: HYP mode not available 
Mar 19 11:32:18.897551 kernel: Initialise system trusted keyrings Mar 19 11:32:18.897558 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 19 11:32:18.897565 kernel: Key type asymmetric registered Mar 19 11:32:18.897571 kernel: Asymmetric key parser 'x509' registered Mar 19 11:32:18.897578 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 19 11:32:18.897587 kernel: io scheduler mq-deadline registered Mar 19 11:32:18.897594 kernel: io scheduler kyber registered Mar 19 11:32:18.897601 kernel: io scheduler bfq registered Mar 19 11:32:18.897608 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 19 11:32:18.897615 kernel: ACPI: button: Power Button [PWRB] Mar 19 11:32:18.897622 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 19 11:32:18.897688 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 19 11:32:18.897715 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 11:32:18.897723 kernel: thunder_xcv, ver 1.0 Mar 19 11:32:18.897732 kernel: thunder_bgx, ver 1.0 Mar 19 11:32:18.897740 kernel: nicpf, ver 1.0 Mar 19 11:32:18.897746 kernel: nicvf, ver 1.0 Mar 19 11:32:18.897822 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 19 11:32:18.897883 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:32:18 UTC (1742383938) Mar 19 11:32:18.897893 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 19 11:32:18.897900 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 19 11:32:18.897907 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 19 11:32:18.897916 kernel: watchdog: Hard watchdog permanently disabled Mar 19 11:32:18.897924 kernel: NET: Registered PF_INET6 protocol family Mar 19 11:32:18.897931 kernel: Segment Routing with IPv6 Mar 19 11:32:18.897939 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 11:32:18.897946 kernel: NET: Registered PF_PACKET protocol family Mar 19 11:32:18.897954 kernel: Key type dns_resolver registered Mar 19 11:32:18.897961 kernel: registered taskstats version 1 Mar 19 11:32:18.897969 kernel: Loading compiled-in X.509 certificates Mar 19 11:32:18.897976 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff' Mar 19 11:32:18.897985 kernel: Key type .fscrypt registered Mar 19 11:32:18.897992 kernel: Key type fscrypt-provisioning registered Mar 19 11:32:18.898000 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 19 11:32:18.898007 kernel: ima: Allocated hash algorithm: sha1 Mar 19 11:32:18.898014 kernel: ima: No architecture policies found Mar 19 11:32:18.898021 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 19 11:32:18.898028 kernel: clk: Disabling unused clocks Mar 19 11:32:18.898035 kernel: Freeing unused kernel memory: 38336K Mar 19 11:32:18.898042 kernel: Run /init as init process Mar 19 11:32:18.898051 kernel: with arguments: Mar 19 11:32:18.898057 kernel: /init Mar 19 11:32:18.898064 kernel: with environment: Mar 19 11:32:18.898071 kernel: HOME=/ Mar 19 11:32:18.898078 kernel: TERM=linux Mar 19 11:32:18.898085 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 11:32:18.898093 systemd[1]: Successfully made /usr/ read-only. 
Mar 19 11:32:18.898102 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:32:18.898112 systemd[1]: Detected virtualization kvm. Mar 19 11:32:18.898119 systemd[1]: Detected architecture arm64. Mar 19 11:32:18.898127 systemd[1]: Running in initrd. Mar 19 11:32:18.898134 systemd[1]: No hostname configured, using default hostname. Mar 19 11:32:18.898141 systemd[1]: Hostname set to . Mar 19 11:32:18.898149 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:32:18.898156 systemd[1]: Queued start job for default target initrd.target. Mar 19 11:32:18.898163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:32:18.898173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:32:18.898181 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 11:32:18.898189 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:32:18.898197 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 11:32:18.898210 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 11:32:18.898221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 11:32:18.898246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 11:32:18.898263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:32:18.898271 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:32:18.898279 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:32:18.898287 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:32:18.898294 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:32:18.898302 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:32:18.898310 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:32:18.898318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:32:18.898327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 11:32:18.898334 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 19 11:32:18.898342 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:32:18.898350 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:32:18.898357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:32:18.898365 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:32:18.898372 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 11:32:18.898380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:32:18.898389 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 11:32:18.898397 systemd[1]: Starting systemd-fsck-usr.service... 
Mar 19 11:32:18.898404 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:32:18.898412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:32:18.898419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:32:18.898426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 11:32:18.898434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:32:18.898443 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 11:32:18.898451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:32:18.898459 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:32:18.898466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:32:18.898492 systemd-journald[239]: Collecting audit messages is disabled. Mar 19 11:32:18.898512 systemd-journald[239]: Journal started Mar 19 11:32:18.898531 systemd-journald[239]: Runtime Journal (/run/log/journal/648605f020504c26aec9ae1582925321) is 5.9M, max 47.3M, 41.4M free. Mar 19 11:32:18.889663 systemd-modules-load[240]: Inserted module 'overlay' Mar 19 11:32:18.900480 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:32:18.901773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:32:18.906529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 19 11:32:18.907147 systemd-modules-load[240]: Inserted module 'br_netfilter' Mar 19 11:32:18.908588 kernel: Bridge firewalling registered Mar 19 11:32:18.917838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:32:18.919659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:32:18.921688 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:32:18.923690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:32:18.927315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:32:18.930693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 11:32:18.932312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:32:18.933771 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:32:18.941602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:32:18.944369 dracut-cmdline[274]: dracut-dracut-053 Mar 19 11:32:18.950508 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb Mar 19 11:32:18.950266 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:32:18.982017 systemd-resolved[287]: Positive Trust Anchors: Mar 19 11:32:18.982036 systemd-resolved[287]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:32:18.982067 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:32:18.988289 systemd-resolved[287]: Defaulting to hostname 'linux'. Mar 19 11:32:18.990753 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:32:18.991874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:32:19.022720 kernel: SCSI subsystem initialized Mar 19 11:32:19.026718 kernel: Loading iSCSI transport class v2.0-870. Mar 19 11:32:19.034745 kernel: iscsi: registered transport (tcp) Mar 19 11:32:19.047722 kernel: iscsi: registered transport (qla4xxx) Mar 19 11:32:19.047743 kernel: QLogic iSCSI HBA Driver Mar 19 11:32:19.088927 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 19 11:32:19.098858 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:32:19.116166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:32:19.117478 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:32:19.117489 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:32:19.164731 kernel: raid6: neonx8 gen() 15745 MB/s Mar 19 11:32:19.181732 kernel: raid6: neonx4 gen() 15752 MB/s Mar 19 11:32:19.198719 kernel: raid6: neonx2 gen() 13195 MB/s Mar 19 11:32:19.215718 kernel: raid6: neonx1 gen() 10478 MB/s Mar 19 11:32:19.232718 kernel: raid6: int64x8 gen() 6755 MB/s Mar 19 11:32:19.249724 kernel: raid6: int64x4 gen() 7331 MB/s Mar 19 11:32:19.266727 kernel: raid6: int64x2 gen() 6087 MB/s Mar 19 11:32:19.283809 kernel: raid6: int64x1 gen() 5031 MB/s Mar 19 11:32:19.283832 kernel: raid6: using algorithm neonx4 gen() 15752 MB/s Mar 19 11:32:19.301876 kernel: raid6: .... xor() 12437 MB/s, rmw enabled Mar 19 11:32:19.301891 kernel: raid6: using neon recovery algorithm Mar 19 11:32:19.307162 kernel: xor: measuring software checksum speed Mar 19 11:32:19.307178 kernel: 8regs : 21579 MB/sec Mar 19 11:32:19.307881 kernel: 32regs : 21167 MB/sec Mar 19 11:32:19.309135 kernel: arm64_neon : 27851 MB/sec Mar 19 11:32:19.309146 kernel: xor: using function: arm64_neon (27851 MB/sec) Mar 19 11:32:19.357726 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:32:19.368262 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:32:19.377867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:32:19.392113 systemd-udevd[464]: Using default interface naming scheme 'v255'. Mar 19 11:32:19.396112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:32:19.408133 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 19 11:32:19.418788 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Mar 19 11:32:19.446905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:32:19.454863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:32:19.493297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:32:19.500880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:32:19.511905 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:32:19.513661 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:32:19.515801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:32:19.518156 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:32:19.527851 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:32:19.537935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:32:19.553727 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 19 11:32:19.560621 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 19 11:32:19.560740 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:32:19.560758 kernel: GPT:9289727 != 19775487 Mar 19 11:32:19.560768 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:32:19.560779 kernel: GPT:9289727 != 19775487 Mar 19 11:32:19.560787 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:32:19.560796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:32:19.559381 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:32:19.559494 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:32:19.563689 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:32:19.564998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:32:19.565136 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:32:19.568735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:32:19.579718 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (522) Mar 19 11:32:19.579757 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) Mar 19 11:32:19.584241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:32:19.594577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:32:19.607665 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 11:32:19.615094 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 11:32:19.625578 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 11:32:19.626789 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 19 11:32:19.635910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:32:19.649848 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 19 11:32:19.654333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:32:19.656692 disk-uuid[553]: Primary Header is updated. Mar 19 11:32:19.656692 disk-uuid[553]: Secondary Entries is updated. Mar 19 11:32:19.656692 disk-uuid[553]: Secondary Header is updated. Mar 19 11:32:19.662732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:32:19.674245 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:32:20.668958 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:32:20.669646 disk-uuid[555]: The operation has completed successfully. Mar 19 11:32:20.696667 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:32:20.696797 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:32:20.733910 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:32:20.736773 sh[574]: Success Mar 19 11:32:20.753789 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:32:20.784482 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:32:20.793141 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:32:20.795723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:32:20.805739 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:32:20.805787 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:32:20.805797 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:32:20.806795 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:32:20.808205 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:32:20.811456 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:32:20.812934 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:32:20.825889 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:32:20.827569 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:32:20.837851 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:32:20.837905 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:32:20.837916 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:32:20.840754 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:32:20.850760 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:32:20.855897 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:32:20.863895 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:32:20.874254 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:32:20.928986 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:32:20.940864 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 19 11:32:20.959535 ignition[672]: Ignition 2.20.0 Mar 19 11:32:20.959546 ignition[672]: Stage: fetch-offline Mar 19 11:32:20.959581 ignition[672]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:32:20.959590 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:32:20.959801 ignition[672]: parsed url from cmdline: "" Mar 19 11:32:20.959804 ignition[672]: no config URL provided Mar 19 11:32:20.959809 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:32:20.959816 ignition[672]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:32:20.959840 ignition[672]: op(1): [started] loading QEMU firmware config module Mar 19 11:32:20.959844 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 19 11:32:20.967351 ignition[672]: op(1): [finished] loading QEMU firmware config module Mar 19 11:32:20.969907 systemd-networkd[768]: lo: Link UP Mar 19 11:32:20.969920 systemd-networkd[768]: lo: Gained carrier Mar 19 11:32:20.970762 systemd-networkd[768]: Enumeration completed Mar 19 11:32:20.970890 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:32:20.971167 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:32:20.971170 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:32:20.971874 systemd-networkd[768]: eth0: Link UP Mar 19 11:32:20.971877 systemd-networkd[768]: eth0: Gained carrier Mar 19 11:32:20.971884 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:32:20.972774 systemd[1]: Reached target network.target - Network. Mar 19 11:32:20.992747 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:32:21.016349 ignition[672]: parsing config with SHA512: 1c0a6b175cfe41152353ebece124e35b57656f8f2336082ce6633a5f14747051424a7ef461e99a65fab11319a66f4a2b6a18fb915214ad51d82d7a66b03f2587 Mar 19 11:32:21.022495 unknown[672]: fetched base config from "system" Mar 19 11:32:21.022508 unknown[672]: fetched user config from "qemu" Mar 19 11:32:21.023032 ignition[672]: fetch-offline: fetch-offline passed Mar 19 11:32:21.023132 ignition[672]: Ignition finished successfully Mar 19 11:32:21.025204 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:32:21.027130 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 19 11:32:21.036931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:32:21.050024 ignition[776]: Ignition 2.20.0 Mar 19 11:32:21.050034 ignition[776]: Stage: kargs Mar 19 11:32:21.050206 ignition[776]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:32:21.050216 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:32:21.051084 ignition[776]: kargs: kargs passed Mar 19 11:32:21.054952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:32:21.051128 ignition[776]: Ignition finished successfully Mar 19 11:32:21.066881 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 19 11:32:21.076213 ignition[784]: Ignition 2.20.0 Mar 19 11:32:21.076224 ignition[784]: Stage: disks Mar 19 11:32:21.076391 ignition[784]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:32:21.079222 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:32:21.076401 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:32:21.080403 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:32:21.077272 ignition[784]: disks: disks passed Mar 19 11:32:21.081512 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:32:21.077321 ignition[784]: Ignition finished successfully Mar 19 11:32:21.083460 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:32:21.085308 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:32:21.087151 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:32:21.107881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:32:21.127204 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:32:21.130643 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:32:21.145857 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:32:21.190710 kernel: EXT4-fs (vda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:32:21.191053 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:32:21.192345 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:32:21.208822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:32:21.210672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:32:21.212040 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:32:21.216764 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804) Mar 19 11:32:21.212085 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:32:21.212110 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:32:21.223589 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:32:21.223613 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:32:21.223631 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:32:21.216724 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:32:21.225419 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:32:21.218428 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:32:21.228959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:32:21.266442 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:32:21.271160 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:32:21.275529 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:32:21.280358 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:32:21.357752 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 19 11:32:21.377816 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:32:21.380264 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:32:21.385714 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:32:21.404582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:32:21.408905 ignition[917]: INFO : Ignition 2.20.0 Mar 19 11:32:21.409855 ignition[917]: INFO : Stage: mount Mar 19 11:32:21.410497 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:32:21.410497 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:32:21.412595 ignition[917]: INFO : mount: mount passed Mar 19 11:32:21.412595 ignition[917]: INFO : Ignition finished successfully Mar 19 11:32:21.412204 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:32:21.422811 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:32:21.875062 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:32:21.884892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:32:21.891754 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930) Mar 19 11:32:21.891802 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:32:21.891814 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:32:21.893298 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:32:21.895707 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:32:21.896577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:32:21.917923 ignition[947]: INFO : Ignition 2.20.0 Mar 19 11:32:21.917923 ignition[947]: INFO : Stage: files Mar 19 11:32:21.919609 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:32:21.919609 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:32:21.919609 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:32:21.923058 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:32:21.923058 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:32:21.926503 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:32:21.927956 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:32:21.927956 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:32:21.927033 unknown[947]: wrote ssh authorized keys file for user: core Mar 19 11:32:21.931739 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:32:21.931739 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 19 11:32:21.974459 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:32:22.883993 systemd-networkd[768]: eth0: Gained IPv6LL Mar 19 11:32:24.062615 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:32:24.064894 ignition[947]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 19 11:32:24.064894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 19 11:32:24.388040 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 19 11:32:24.603387 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 19 11:32:24.603387 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 19 11:32:24.606846 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 19 11:32:24.625100 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:32:24.628050 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:32:24.632604 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 19 11:32:24.632604 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:32:24.632604 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:32:24.632604 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:32:24.632604 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:32:24.632604 ignition[947]: INFO : files: files passed Mar 19 11:32:24.632604 ignition[947]: INFO : Ignition finished successfully Mar 19 11:32:24.630849 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:32:24.637840 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:32:24.639605 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:32:24.642020 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:32:24.648837 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Mar 19 11:32:24.642094 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:32:24.651404 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:32:24.651404 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:32:24.654117 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:32:24.653005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:32:24.655522 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:32:24.667849 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:32:24.683541 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:32:24.683631 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:32:24.685714 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:32:24.687498 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:32:24.689209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:32:24.699866 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:32:24.711436 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Mar 19 11:32:24.713608 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:32:24.723218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:32:24.724384 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:32:24.726301 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:32:24.727937 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:32:24.728042 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:32:24.730414 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:32:24.732322 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:32:24.733858 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:32:24.735457 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:32:24.737287 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:32:24.739121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:32:24.740838 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:32:24.742703 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:32:24.744570 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:32:24.746208 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:32:24.747657 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:32:24.747780 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:32:24.749975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:32:24.751789 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:32:24.753624 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:32:24.756756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:32:24.757914 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:32:24.758019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:32:24.760760 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:32:24.760871 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:32:24.762754 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:32:24.764278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:32:24.770743 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:32:24.772019 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:32:24.774027 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:32:24.775486 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:32:24.775559 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:32:24.777028 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:32:24.777101 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:32:24.778554 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:32:24.778655 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 19 11:32:24.780341 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:32:24.780435 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:32:24.792826 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:32:24.794266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:32:24.795239 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:32:24.795363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:32:24.797083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:32:24.797189 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:32:24.803500 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:32:24.803588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:32:24.806801 ignition[1002]: INFO : Ignition 2.20.0 Mar 19 11:32:24.806801 ignition[1002]: INFO : Stage: umount Mar 19 11:32:24.806801 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:32:24.806801 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:32:24.806801 ignition[1002]: INFO : umount: umount passed Mar 19 11:32:24.806801 ignition[1002]: INFO : Ignition finished successfully Mar 19 11:32:24.808670 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:32:24.809673 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:32:24.809830 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:32:24.811455 systemd[1]: Stopped target network.target - Network. Mar 19 11:32:24.813423 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:32:24.813492 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:32:24.815017 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:32:24.815064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:32:24.816641 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:32:24.816684 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:32:24.818866 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:32:24.818912 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:32:24.820826 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:32:24.823121 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:32:24.830379 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:32:24.830472 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:32:24.833663 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:32:24.833856 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:32:24.833937 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:32:24.837294 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:32:24.838215 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:32:24.838272 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:32:24.846784 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:32:24.847607 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 19 11:32:24.847667 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:32:24.849604 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:32:24.849653 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:32:24.852497 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:32:24.852540 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:32:24.854446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:32:24.854489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:32:24.857340 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:32:24.860447 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:32:24.860507 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:32:24.866296 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:32:24.866409 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:32:24.868344 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:32:24.868416 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:32:24.870046 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:32:24.870127 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:32:24.871662 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:32:24.871843 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:32:24.873774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:32:24.873840 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:32:24.874993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:32:24.875026 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:32:24.876686 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:32:24.876753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:32:24.879532 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:32:24.879576 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:32:24.882225 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:32:24.882267 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:32:24.893810 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:32:24.894782 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:32:24.894834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:32:24.897776 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 19 11:32:24.897819 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:32:24.899935 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:32:24.899976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:32:24.901914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 19 11:32:24.901956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:32:24.905446 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 19 11:32:24.905494 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:32:24.905795 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:32:24.905879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:32:24.907274 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:32:24.909284 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:32:24.918331 systemd[1]: Switching root. Mar 19 11:32:24.949429 systemd-journald[239]: Journal stopped Mar 19 11:32:25.667710 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Mar 19 11:32:25.667766 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:32:25.667778 kernel: SELinux: policy capability open_perms=1 Mar 19 11:32:25.667788 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:32:25.667800 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:32:25.667810 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:32:25.667819 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:32:25.667828 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:32:25.667837 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:32:25.667846 kernel: audit: type=1403 audit(1742383945.082:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:32:25.667856 systemd[1]: Successfully loaded SELinux policy in 31.414ms. Mar 19 11:32:25.667882 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.274ms. Mar 19 11:32:25.667895 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:32:25.667905 systemd[1]: Detected virtualization kvm. Mar 19 11:32:25.667915 systemd[1]: Detected architecture arm64. Mar 19 11:32:25.667926 systemd[1]: Detected first boot. Mar 19 11:32:25.667935 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:32:25.667946 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:32:25.667966 zram_generator::config[1048]: No configuration found. Mar 19 11:32:25.668026 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:32:25.668041 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:32:25.668061 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:32:25.668073 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:32:25.668085 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:32:25.668096 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:32:25.668106 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:32:25.668117 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:32:25.668127 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Mar 19 11:32:25.668138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:32:25.668158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:32:25.668170 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:32:25.668181 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:32:25.668192 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:32:25.668202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:32:25.668213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:32:25.668270 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:32:25.668285 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:32:25.668296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:32:25.668309 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 19 11:32:25.668319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:32:25.668329 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:32:25.668339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:32:25.668349 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:32:25.668361 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:32:25.668375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:32:25.668386 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:32:25.668397 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:32:25.668408 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:32:25.668418 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:32:25.668428 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:32:25.668438 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:32:25.668448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:32:25.668458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:32:25.668468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:32:25.668478 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:32:25.668490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:32:25.668501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:32:25.668511 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:32:25.668521 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:32:25.668531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:32:25.668541 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 19 11:32:25.668551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:32:25.668561 systemd[1]: Reached target machines.target - Containers. Mar 19 11:32:25.668571 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:32:25.668584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:32:25.668594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:32:25.668604 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:32:25.668614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:32:25.668624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:32:25.668634 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:32:25.668644 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:32:25.668654 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:32:25.668665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:32:25.668676 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:32:25.668686 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 19 11:32:25.668704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:32:25.668715 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:32:25.668725 kernel: fuse: init (API version 7.39) Mar 19 11:32:25.668735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:32:25.668745 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:32:25.668756 kernel: loop: module loaded Mar 19 11:32:25.668766 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:32:25.668776 kernel: ACPI: bus type drm_connector registered Mar 19 11:32:25.668785 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:32:25.668795 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:32:25.668806 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:32:25.668816 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:32:25.668826 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:32:25.668836 systemd[1]: Stopped verity-setup.service. Mar 19 11:32:25.668867 systemd-journald[1123]: Collecting audit messages is disabled. Mar 19 11:32:25.668887 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:32:25.668897 systemd-journald[1123]: Journal started Mar 19 11:32:25.668919 systemd-journald[1123]: Runtime Journal (/run/log/journal/648605f020504c26aec9ae1582925321) is 5.9M, max 47.3M, 41.4M free. Mar 19 11:32:25.471167 systemd[1]: Queued start job for default target multi-user.target. 
Mar 19 11:32:25.483421 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 19 11:32:25.483796 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:32:25.671472 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:32:25.672102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:32:25.673301 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:32:25.674338 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:32:25.675493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:32:25.676669 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:32:25.679729 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:32:25.681047 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:32:25.684040 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:32:25.684217 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:32:25.685543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:32:25.685723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:32:25.686987 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:32:25.687148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:32:25.688378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:32:25.688528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:32:25.690023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:32:25.690188 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:32:25.691507 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:32:25.692740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:32:25.694078 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:32:25.695473 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:32:25.696928 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:32:25.698415 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:32:25.709978 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:32:25.719790 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:32:25.721740 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:32:25.722803 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:32:25.722840 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:32:25.724626 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:32:25.726692 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:32:25.728618 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:32:25.729684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 19 11:32:25.730829 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:32:25.732597 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:32:25.733801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:32:25.735856 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:32:25.736956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:32:25.740558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:32:25.742522 systemd-journald[1123]: Time spent on flushing to /var/log/journal/648605f020504c26aec9ae1582925321 is 13.914ms for 869 entries. Mar 19 11:32:25.742522 systemd-journald[1123]: System Journal (/var/log/journal/648605f020504c26aec9ae1582925321) is 8M, max 195.6M, 187.6M free. Mar 19 11:32:25.770247 systemd-journald[1123]: Received client request to flush runtime journal. Mar 19 11:32:25.770293 kernel: loop0: detected capacity change from 0 to 194096 Mar 19 11:32:25.742861 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:32:25.748944 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:32:25.753735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:32:25.755091 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:32:25.756966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:32:25.759713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:32:25.764740 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:32:25.770771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:32:25.772380 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:32:25.775307 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:32:25.782755 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:32:25.784988 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:32:25.787865 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:32:25.790387 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Mar 19 11:32:25.790406 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Mar 19 11:32:25.796740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:32:25.814877 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:32:25.816183 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:32:25.820846 kernel: loop1: detected capacity change from 0 to 123192 Mar 19 11:32:25.820740 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 19 11:32:25.848979 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Mar 19 11:32:25.855892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:32:25.859761 kernel: loop2: detected capacity change from 0 to 113512 Mar 19 11:32:25.871423 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 19 11:32:25.871678 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 19 11:32:25.875465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:32:25.901331 kernel: loop3: detected capacity change from 0 to 194096 Mar 19 11:32:25.907956 kernel: loop4: detected capacity change from 0 to 123192 Mar 19 11:32:25.913733 kernel: loop5: detected capacity change from 0 to 113512 Mar 19 11:32:25.918291 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 19 11:32:25.918649 (sd-merge)[1197]: Merged extensions into '/usr'. Mar 19 11:32:25.921913 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:32:25.921929 systemd[1]: Reloading... Mar 19 11:32:25.985725 zram_generator::config[1228]: No configuration found. Mar 19 11:32:26.015520 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:32:26.064620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:32:26.113312 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:32:26.113386 systemd[1]: Reloading finished in 191 ms. Mar 19 11:32:26.131352 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:32:26.132863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:32:26.145867 systemd[1]: Starting ensure-sysext.service... Mar 19 11:32:26.147521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:32:26.157472 systemd[1]: Reload requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:32:26.157486 systemd[1]: Reloading... Mar 19 11:32:26.166004 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:32:26.166213 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:32:26.166840 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:32:26.167039 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 19 11:32:26.167082 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 19 11:32:26.169923 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:32:26.170026 systemd-tmpfiles[1260]: Skipping /boot Mar 19 11:32:26.178324 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:32:26.178423 systemd-tmpfiles[1260]: Skipping /boot Mar 19 11:32:26.196721 zram_generator::config[1285]: No configuration found. Mar 19 11:32:26.279972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:32:26.328804 systemd[1]: Reloading finished in 171 ms. 
Mar 19 11:32:26.342121 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:32:26.360708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:32:26.368016 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:32:26.370263 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:32:26.372452 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:32:26.375943 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:32:26.378996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:32:26.384581 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:32:26.388073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:32:26.389502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:32:26.391920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:32:26.395393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:32:26.396973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:32:26.397092 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:32:26.401331 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 19 11:32:26.404759 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:32:26.407040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:32:26.407189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:32:26.409237 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:32:26.409375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:32:26.411079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:32:26.411218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:32:26.421686 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:32:26.425891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:32:26.427954 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Mar 19 11:32:26.432931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:32:26.437417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:32:26.453866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:32:26.456030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:32:26.457180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 19 11:32:26.457297 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:32:26.458613 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:32:26.462580 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:32:26.472234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:32:26.474244 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:32:26.475920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:32:26.477741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:32:26.479279 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:32:26.479420 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:32:26.480889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:32:26.481026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:32:26.488474 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:32:26.488640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:32:26.490653 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:32:26.495483 systemd[1]: Finished ensure-sysext.service. Mar 19 11:32:26.506735 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 19 11:32:26.519859 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:32:26.520837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:32:26.520902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:32:26.522447 augenrules[1397]: No rules Mar 19 11:32:26.529824 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 19 11:32:26.530892 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:32:26.531226 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:32:26.531405 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:32:26.546678 systemd-resolved[1329]: Positive Trust Anchors: Mar 19 11:32:26.548473 systemd-resolved[1329]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:32:26.548505 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:32:26.552777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373) Mar 19 11:32:26.554191 systemd-resolved[1329]: Defaulting to hostname 'linux'. Mar 19 11:32:26.556449 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:32:26.557913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:32:26.595295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:32:26.610869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:32:26.613078 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 19 11:32:26.613665 systemd-networkd[1395]: lo: Link UP Mar 19 11:32:26.613678 systemd-networkd[1395]: lo: Gained carrier Mar 19 11:32:26.614595 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:32:26.616090 systemd-networkd[1395]: Enumeration completed Mar 19 11:32:26.616169 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:32:26.617625 systemd[1]: Reached target network.target - Network. Mar 19 11:32:26.618668 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:32:26.618676 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:32:26.619102 systemd-networkd[1395]: eth0: Link UP Mar 19 11:32:26.619111 systemd-networkd[1395]: eth0: Gained carrier Mar 19 11:32:26.619123 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:32:26.620422 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:32:26.624692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:32:26.629746 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:32:26.634098 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:32:26.635098 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Mar 19 11:32:26.635617 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 19 11:32:26.635665 systemd-timesyncd[1402]: Initial clock synchronization to Wed 2025-03-19 11:32:26.860904 UTC. Mar 19 11:32:26.641800 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:32:26.656901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 19 11:32:26.665835 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:32:26.668505 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:32:26.682937 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:32:26.693738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:32:26.713977 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:32:26.715371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:32:26.716495 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:32:26.717629 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:32:26.718881 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:32:26.720213 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:32:26.721386 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:32:26.722605 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:32:26.723806 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:32:26.723842 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:32:26.724692 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:32:26.726562 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:32:26.728880 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:32:26.731893 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:32:26.733264 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:32:26.734492 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:32:26.738532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:32:26.739947 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:32:26.742071 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:32:26.743621 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:32:26.744811 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:32:26.745719 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:32:26.746632 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:32:26.746665 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:32:26.747504 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:32:26.749028 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:32:26.749455 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:32:26.751835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:32:26.759624 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 19 11:32:26.760794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:32:26.761721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:32:26.763834 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:32:26.765802 jq[1435]: false Mar 19 11:32:26.767884 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:32:26.770824 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:32:26.778779 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:32:26.781775 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:32:26.782556 extend-filesystems[1436]: Found loop3 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found loop4 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found loop5 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda1 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda2 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda3 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found usr Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda4 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda6 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda7 Mar 19 11:32:26.782556 extend-filesystems[1436]: Found vda9 Mar 19 11:32:26.782556 extend-filesystems[1436]: Checking size of /dev/vda9 Mar 19 11:32:26.830134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366) Mar 19 11:32:26.830171 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 19 11:32:26.782167 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:32:26.792896 dbus-daemon[1434]: [system] SELinux support is enabled Mar 19 11:32:26.830421 extend-filesystems[1436]: Resized partition /dev/vda9 Mar 19 11:32:26.782902 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:32:26.833382 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:32:26.788837 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:32:26.793798 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:32:26.841474 jq[1453]: true Mar 19 11:32:26.797241 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:32:26.800859 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:32:26.842847 tar[1457]: linux-arm64/helm Mar 19 11:32:26.801039 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:32:26.843570 jq[1460]: true Mar 19 11:32:26.801291 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:32:26.801452 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:32:26.803894 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:32:26.804047 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 19 11:32:26.823237 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:32:26.825155 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:32:26.825180 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:32:26.830583 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:32:26.830600 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:32:26.848308 update_engine[1452]: I20250319 11:32:26.848093 1452 main.cc:92] Flatcar Update Engine starting Mar 19 11:32:26.855101 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:32:26.856564 update_engine[1452]: I20250319 11:32:26.855365 1452 update_check_scheduler.cc:74] Next update check in 5m33s Mar 19 11:32:26.861763 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 19 11:32:26.869977 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:32:26.875726 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 19 11:32:26.875726 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 19 11:32:26.875726 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 19 11:32:26.883944 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Mar 19 11:32:26.877123 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:32:26.877386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:32:26.890279 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:32:26.890934 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (Power Button) Mar 19 11:32:26.891734 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:32:26.894092 systemd-logind[1447]: New seat seat0. Mar 19 11:32:26.897181 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:32:26.898808 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 19 11:32:26.937901 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:32:27.026314 containerd[1461]: time="2025-03-19T11:32:27.025606933Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:32:27.054368 containerd[1461]: time="2025-03-19T11:32:27.054265639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.055841 containerd[1461]: time="2025-03-19T11:32:27.055693806Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:32:27.055841 containerd[1461]: time="2025-03-19T11:32:27.055733329Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 19 11:32:27.055841 containerd[1461]: time="2025-03-19T11:32:27.055748381Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:32:27.055938 containerd[1461]: time="2025-03-19T11:32:27.055893517Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:32:27.055938 containerd[1461]: time="2025-03-19T11:32:27.055910050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056005 containerd[1461]: time="2025-03-19T11:32:27.055960554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056005 containerd[1461]: time="2025-03-19T11:32:27.055974825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056178 containerd[1461]: time="2025-03-19T11:32:27.056142992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056178 containerd[1461]: time="2025-03-19T11:32:27.056166434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056231 containerd[1461]: time="2025-03-19T11:32:27.056179800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056231 containerd[1461]: time="2025-03-19T11:32:27.056188560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056275 containerd[1461]: time="2025-03-19T11:32:27.056265344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056477 containerd[1461]: time="2025-03-19T11:32:27.056453252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056601 containerd[1461]: time="2025-03-19T11:32:27.056581444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:32:27.056601 containerd[1461]: time="2025-03-19T11:32:27.056599416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:32:27.056708 containerd[1461]: time="2025-03-19T11:32:27.056681793Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:32:27.056776 containerd[1461]: time="2025-03-19T11:32:27.056759152Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060237733Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060299464Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060315792Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060330186Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060344992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060480299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060682560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060818772Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060836950Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060864998Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060878570Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060892471Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060903822Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.062760 containerd[1461]: time="2025-03-19T11:32:27.060917887Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.060934338Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.060948239Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.060960700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.060971064Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.060990188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061003225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061015152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061026873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061037689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061050192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061061008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061073017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061085314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063051 containerd[1461]: time="2025-03-19T11:32:27.061099256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061110196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061121053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061132404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061146552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061165552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061177808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061187473Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061367155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061385580Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061395656Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061406555Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061415232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061427488Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:32:27.063276 containerd[1461]: time="2025-03-19T11:32:27.061437153Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:32:27.063499 containerd[1461]: time="2025-03-19T11:32:27.061446941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.061699089Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.061761437Z" level=info msg="Connect containerd service" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.061787923Z" level=info msg="using legacy CRI server" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.061796642Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:32:27.063519 containerd[1461]: 
time="2025-03-19T11:32:27.062034519Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.062598983Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.062831514Z" level=info msg="Start subscribing containerd event" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.062892710Z" level=info msg="Start recovering state" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.063032993Z" level=info msg="Start event monitor" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.063046401Z" level=info msg="Start snapshots syncer" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.063055490Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:32:27.063519 containerd[1461]: time="2025-03-19T11:32:27.063063674Z" level=info msg="Start streaming server" Mar 19 11:32:27.068052 containerd[1461]: time="2025-03-19T11:32:27.068023804Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:32:27.068304 containerd[1461]: time="2025-03-19T11:32:27.068285904Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:32:27.068488 containerd[1461]: time="2025-03-19T11:32:27.068473237Z" level=info msg="containerd successfully booted in 0.043935s" Mar 19 11:32:27.068557 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:32:27.217986 tar[1457]: linux-arm64/LICENSE Mar 19 11:32:27.218201 tar[1457]: linux-arm64/README.md Mar 19 11:32:27.228988 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 19 11:32:28.082303 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:32:28.100705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:32:28.111010 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:32:28.115551 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:32:28.115765 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:32:28.118743 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:32:28.129206 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:32:28.131931 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:32:28.134001 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 19 11:32:28.135328 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:32:28.389927 systemd-networkd[1395]: eth0: Gained IPv6LL Mar 19 11:32:28.395369 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:32:28.397170 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:32:28.406964 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 19 11:32:28.409228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:28.411315 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:32:28.426292 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Mar 19 11:32:28.427218 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 19 11:32:28.428957 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:32:28.430802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:32:28.892776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:28.894309 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:32:28.899636 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:32:28.900781 systemd[1]: Startup finished in 546ms (kernel) + 6.384s (initrd) + 3.853s (userspace) = 10.784s. Mar 19 11:32:29.344734 kubelet[1547]: E0319 11:32:29.344570 1547 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:32:29.347082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:32:29.347233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:32:29.347547 systemd[1]: kubelet.service: Consumed 788ms CPU time, 237.4M memory peak. Mar 19 11:32:31.711954 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:32:31.713010 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:35468.service - OpenSSH per-connection server daemon (10.0.0.1:35468). Mar 19 11:32:31.778391 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 35468 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:31.779954 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:31.787458 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:32:31.796944 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:32:31.801621 systemd-logind[1447]: New session 1 of user core. Mar 19 11:32:31.805119 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:32:31.807988 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:32:31.812634 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:32:31.814553 systemd-logind[1447]: New session c1 of user core. Mar 19 11:32:31.930180 systemd[1565]: Queued start job for default target default.target. Mar 19 11:32:31.943588 systemd[1565]: Created slice app.slice - User Application Slice. Mar 19 11:32:31.943618 systemd[1565]: Reached target paths.target - Paths. Mar 19 11:32:31.943652 systemd[1565]: Reached target timers.target - Timers. Mar 19 11:32:31.944842 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:32:31.953669 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:32:31.953751 systemd[1565]: Reached target sockets.target - Sockets. Mar 19 11:32:31.953786 systemd[1565]: Reached target basic.target - Basic System. Mar 19 11:32:31.953814 systemd[1565]: Reached target default.target - Main User Target. Mar 19 11:32:31.953850 systemd[1565]: Startup finished in 134ms. 
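Editor's note: the kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node before kubeadm init/join writes that file; systemd keeps scheduling restarts (the counter reaches 2 later in this log) until it exists. A minimal sketch of the same check, with the path copied from the error message; it is illustrative only.

from pathlib import Path

# Path reported by the kubelet error above; kubeadm creates it on init/join.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present() -> bool:
    """Report whether the kubelet's config file exists yet."""
    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present; kubelet can start")
        return True
    print(f"{KUBELET_CONFIG} missing; kubelet will exit and systemd will retry")
    return False

if __name__ == "__main__":
    kubelet_config_present()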
Mar 19 11:32:31.953993 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:32:31.955275 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:32:32.020994 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472). Mar 19 11:32:32.062569 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:32.063829 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:32.068209 systemd-logind[1447]: New session 2 of user core. Mar 19 11:32:32.074895 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:32:32.126595 sshd[1578]: Connection closed by 10.0.0.1 port 35472 Mar 19 11:32:32.126979 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Mar 19 11:32:32.144774 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:35472.service: Deactivated successfully. Mar 19 11:32:32.146193 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:32:32.147432 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Mar 19 11:32:32.148556 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:35486.service - OpenSSH per-connection server daemon (10.0.0.1:35486). Mar 19 11:32:32.150081 systemd-logind[1447]: Removed session 2. Mar 19 11:32:32.188596 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 35486 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:32.189661 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:32.193764 systemd-logind[1447]: New session 3 of user core. Mar 19 11:32:32.203855 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:32:32.251600 sshd[1586]: Connection closed by 10.0.0.1 port 35486 Mar 19 11:32:32.251859 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Mar 19 11:32:32.264786 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:35486.service: Deactivated successfully. Mar 19 11:32:32.266191 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:32:32.267392 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:32:32.268544 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:35498.service - OpenSSH per-connection server daemon (10.0.0.1:35498). Mar 19 11:32:32.269225 systemd-logind[1447]: Removed session 3. Mar 19 11:32:32.308713 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 35498 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:32.309842 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:32.313557 systemd-logind[1447]: New session 4 of user core. Mar 19 11:32:32.322869 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:32:32.373593 sshd[1594]: Connection closed by 10.0.0.1 port 35498 Mar 19 11:32:32.374021 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Mar 19 11:32:32.384680 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:35498.service: Deactivated successfully. Mar 19 11:32:32.386116 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:32:32.387922 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:32:32.402999 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:45662.service - OpenSSH per-connection server daemon (10.0.0.1:45662). 
Mar 19 11:32:32.403866 systemd-logind[1447]: Removed session 4. Mar 19 11:32:32.439273 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 45662 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:32.440295 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:32.444298 systemd-logind[1447]: New session 5 of user core. Mar 19 11:32:32.450847 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:32:32.507142 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:32:32.507394 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:32:32.518534 sudo[1603]: pam_unix(sudo:session): session closed for user root Mar 19 11:32:32.519996 sshd[1602]: Connection closed by 10.0.0.1 port 45662 Mar 19 11:32:32.520349 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Mar 19 11:32:32.530798 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:45662.service: Deactivated successfully. Mar 19 11:32:32.532181 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:32:32.533747 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:32:32.539963 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:45670.service - OpenSSH per-connection server daemon (10.0.0.1:45670). Mar 19 11:32:32.540749 systemd-logind[1447]: Removed session 5. Mar 19 11:32:32.577369 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 45670 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:32.578892 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:32.582757 systemd-logind[1447]: New session 6 of user core. Mar 19 11:32:32.592890 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:32:32.643845 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:32:32.644116 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:32:32.646828 sudo[1613]: pam_unix(sudo:session): session closed for user root Mar 19 11:32:32.651092 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:32:32.651355 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:32:32.671998 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:32:32.693218 augenrules[1635]: No rules Mar 19 11:32:32.694225 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:32:32.694449 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:32:32.695540 sudo[1612]: pam_unix(sudo:session): session closed for user root Mar 19 11:32:32.696823 sshd[1611]: Connection closed by 10.0.0.1 port 45670 Mar 19 11:32:32.696869 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Mar 19 11:32:32.702747 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:45670.service: Deactivated successfully. Mar 19 11:32:32.704090 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:32:32.705369 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:32:32.706917 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:45674.service - OpenSSH per-connection server daemon (10.0.0.1:45674). Mar 19 11:32:32.707810 systemd-logind[1447]: Removed session 6. 
Mar 19 11:32:32.746285 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 45674 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:32:32.747363 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:32:32.751314 systemd-logind[1447]: New session 7 of user core. Mar 19 11:32:32.761846 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:32:32.812381 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:32:32.813018 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:32:33.154977 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:32:33.155078 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:32:33.410995 dockerd[1666]: time="2025-03-19T11:32:33.410848510Z" level=info msg="Starting up" Mar 19 11:32:33.562420 dockerd[1666]: time="2025-03-19T11:32:33.562385039Z" level=info msg="Loading containers: start." Mar 19 11:32:33.714742 kernel: Initializing XFRM netlink socket Mar 19 11:32:33.775886 systemd-networkd[1395]: docker0: Link UP Mar 19 11:32:33.807772 dockerd[1666]: time="2025-03-19T11:32:33.807731807Z" level=info msg="Loading containers: done." Mar 19 11:32:33.822748 dockerd[1666]: time="2025-03-19T11:32:33.822689862Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:32:33.822886 dockerd[1666]: time="2025-03-19T11:32:33.822789830Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:32:33.822989 dockerd[1666]: time="2025-03-19T11:32:33.822956834Z" level=info msg="Daemon has completed initialization" Mar 19 11:32:33.851236 dockerd[1666]: time="2025-03-19T11:32:33.851059465Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:32:33.851315 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:32:34.695605 containerd[1461]: time="2025-03-19T11:32:34.695564444Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 19 11:32:35.595842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780082919.mount: Deactivated successfully. 
Mar 19 11:32:36.919655 containerd[1461]: time="2025-03-19T11:32:36.919607940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:36.920868 containerd[1461]: time="2025-03-19T11:32:36.920826978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Mar 19 11:32:36.921788 containerd[1461]: time="2025-03-19T11:32:36.921753599Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:36.924442 containerd[1461]: time="2025-03-19T11:32:36.924381952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:36.925652 containerd[1461]: time="2025-03-19T11:32:36.925621723Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.230011197s" Mar 19 11:32:36.925699 containerd[1461]: time="2025-03-19T11:32:36.925658714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 19 11:32:36.946158 containerd[1461]: time="2025-03-19T11:32:36.946128738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 19 11:32:38.757285 containerd[1461]: time="2025-03-19T11:32:38.757198285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:38.758585 containerd[1461]: time="2025-03-19T11:32:38.758346885Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Mar 19 11:32:38.759570 containerd[1461]: time="2025-03-19T11:32:38.759519560Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:38.762797 containerd[1461]: time="2025-03-19T11:32:38.762758989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:38.763967 containerd[1461]: time="2025-03-19T11:32:38.763911574Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.817746023s" Mar 19 11:32:38.763967 containerd[1461]: time="2025-03-19T11:32:38.763942735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 19 
11:32:38.781340 containerd[1461]: time="2025-03-19T11:32:38.781314328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 19 11:32:39.598150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:32:39.607848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:39.692332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:39.695724 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:32:39.810273 kubelet[1952]: E0319 11:32:39.810221 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:32:39.813959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:32:39.814098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:32:39.814580 systemd[1]: kubelet.service: Consumed 126ms CPU time, 97.8M memory peak. Mar 19 11:32:39.988963 containerd[1461]: time="2025-03-19T11:32:39.988829873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:39.989754 containerd[1461]: time="2025-03-19T11:32:39.989695115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Mar 19 11:32:39.991055 containerd[1461]: time="2025-03-19T11:32:39.991008283Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:39.994038 containerd[1461]: time="2025-03-19T11:32:39.993988697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:39.995124 containerd[1461]: time="2025-03-19T11:32:39.995078686Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.213731353s" Mar 19 11:32:39.995124 containerd[1461]: time="2025-03-19T11:32:39.995110224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 19 11:32:40.013798 containerd[1461]: time="2025-03-19T11:32:40.013765224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 19 11:32:40.959112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074377743.mount: Deactivated successfully. 
Mar 19 11:32:41.247791 containerd[1461]: time="2025-03-19T11:32:41.247648094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:41.248583 containerd[1461]: time="2025-03-19T11:32:41.248450605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 19 11:32:41.249376 containerd[1461]: time="2025-03-19T11:32:41.249339168Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:41.251322 containerd[1461]: time="2025-03-19T11:32:41.251288158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:41.252181 containerd[1461]: time="2025-03-19T11:32:41.252153862Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.238341817s" Mar 19 11:32:41.252277 containerd[1461]: time="2025-03-19T11:32:41.252261688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 19 11:32:41.269788 containerd[1461]: time="2025-03-19T11:32:41.269740765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 19 11:32:41.799984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846778996.mount: Deactivated successfully. 
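Editor's note: each pull record above includes both the bytes read and the wall-clock duration, so effective pull throughput can be derived straight from the log. For example, the kube-proxy pull reports 25,771,850 bytes read in about 1.238 s, i.e. just under 21 MB/s. A small worked example using the numbers copied from the lines above:

# Figures copied from the kube-proxy pull logged above.
bytes_read = 25_771_850          # "active requests=0, bytes read=25771850"
duration_s = 1.238341817         # "in 1.238341817s"

throughput_mb_s = bytes_read / duration_s / 1_000_000
print(f"kube-proxy pull: {bytes_read} bytes in {duration_s:.3f}s "
      f"~= {throughput_mb_s:.1f} MB/s")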
Mar 19 11:32:42.698328 containerd[1461]: time="2025-03-19T11:32:42.698275694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:42.699365 containerd[1461]: time="2025-03-19T11:32:42.699036977Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 19 11:32:42.700844 containerd[1461]: time="2025-03-19T11:32:42.700815712Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:42.703432 containerd[1461]: time="2025-03-19T11:32:42.703381427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:42.704708 containerd[1461]: time="2025-03-19T11:32:42.704663762Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.434784603s" Mar 19 11:32:42.704708 containerd[1461]: time="2025-03-19T11:32:42.704695643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 19 11:32:42.722787 containerd[1461]: time="2025-03-19T11:32:42.722734948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 19 11:32:43.104221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794505525.mount: Deactivated successfully. 
Mar 19 11:32:43.108588 containerd[1461]: time="2025-03-19T11:32:43.107868996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:43.109587 containerd[1461]: time="2025-03-19T11:32:43.109552175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Mar 19 11:32:43.110505 containerd[1461]: time="2025-03-19T11:32:43.110476118Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:43.113040 containerd[1461]: time="2025-03-19T11:32:43.113009796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:43.113681 containerd[1461]: time="2025-03-19T11:32:43.113652687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 390.883091ms" Mar 19 11:32:43.113807 containerd[1461]: time="2025-03-19T11:32:43.113788096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 19 11:32:43.131707 containerd[1461]: time="2025-03-19T11:32:43.131680684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 19 11:32:43.567457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499138874.mount: Deactivated successfully. Mar 19 11:32:46.155641 containerd[1461]: time="2025-03-19T11:32:46.155592358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:46.156410 containerd[1461]: time="2025-03-19T11:32:46.156372051Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Mar 19 11:32:46.157351 containerd[1461]: time="2025-03-19T11:32:46.157292738Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:46.160335 containerd[1461]: time="2025-03-19T11:32:46.160299142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:32:46.161845 containerd[1461]: time="2025-03-19T11:32:46.161791980Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.030062067s" Mar 19 11:32:46.161845 containerd[1461]: time="2025-03-19T11:32:46.161839005Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 19 11:32:49.839427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 19 11:32:49.848979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:49.937166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:49.940507 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:32:49.975494 kubelet[2179]: E0319 11:32:49.975455 2179 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:32:49.978182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:32:49.978313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:32:49.978591 systemd[1]: kubelet.service: Consumed 115ms CPU time, 98.6M memory peak. Mar 19 11:32:52.063011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:52.063297 systemd[1]: kubelet.service: Consumed 115ms CPU time, 98.6M memory peak. Mar 19 11:32:52.074016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:52.088627 systemd[1]: Reload requested from client PID 2194 ('systemctl') (unit session-7.scope)... Mar 19 11:32:52.088643 systemd[1]: Reloading... Mar 19 11:32:52.165763 zram_generator::config[2237]: No configuration found. Mar 19 11:32:52.318619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:32:52.389573 systemd[1]: Reloading finished in 300 ms. Mar 19 11:32:52.428245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:52.431934 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:52.432855 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:32:52.433785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:52.433823 systemd[1]: kubelet.service: Consumed 73ms CPU time, 82.4M memory peak. Mar 19 11:32:52.435519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:52.528306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:52.531327 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:32:52.569991 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:32:52.569991 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:32:52.569991 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 11:32:52.570765 kubelet[2285]: I0319 11:32:52.570725 2285 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:32:53.433591 kubelet[2285]: I0319 11:32:53.433544 2285 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 19 11:32:53.433591 kubelet[2285]: I0319 11:32:53.433576 2285 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:32:53.433858 kubelet[2285]: I0319 11:32:53.433829 2285 server.go:927] "Client rotation is on, will bootstrap in background" Mar 19 11:32:53.458346 kubelet[2285]: E0319 11:32:53.458316 2285 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.458591 kubelet[2285]: I0319 11:32:53.458551 2285 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:32:53.468879 kubelet[2285]: I0319 11:32:53.468824 2285 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:32:53.469284 kubelet[2285]: I0319 11:32:53.469247 2285 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:32:53.469441 kubelet[2285]: I0319 11:32:53.469275 2285 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 11:32:53.469530 kubelet[2285]: I0319 11:32:53.469509 2285 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:32:53.469530 kubelet[2285]: I0319 11:32:53.469519 2285 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 11:32:53.469749 kubelet[2285]: I0319 11:32:53.469725 2285 state_mem.go:36] "Initialized new in-memory state store" Mar 19 
11:32:53.470688 kubelet[2285]: I0319 11:32:53.470663 2285 kubelet.go:400] "Attempting to sync node with API server" Mar 19 11:32:53.470688 kubelet[2285]: I0319 11:32:53.470685 2285 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:32:53.471728 kubelet[2285]: I0319 11:32:53.470967 2285 kubelet.go:312] "Adding apiserver pod source" Mar 19 11:32:53.471728 kubelet[2285]: I0319 11:32:53.471178 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:32:53.471728 kubelet[2285]: W0319 11:32:53.471486 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.471728 kubelet[2285]: E0319 11:32:53.471533 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.471886 kubelet[2285]: W0319 11:32:53.471832 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.471886 kubelet[2285]: E0319 11:32:53.471886 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.472163 kubelet[2285]: I0319 11:32:53.472137 2285 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:32:53.472538 kubelet[2285]: I0319 11:32:53.472516 2285 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:32:53.472642 kubelet[2285]: W0319 11:32:53.472626 2285 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
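Editor's note: the repeated "dial tcp 10.0.0.50:6443: connect: connection refused" list/watch and CSR failures above are expected at this point: the kubelet has only just started and the kube-apiserver static pod it is about to admit is not serving yet. A hedged sketch of the same reachability probe follows; the address and port are taken from the log, and the helper name is illustrative.

import socket

# Endpoint taken from the failing requests logged above.
APISERVER = ("10.0.0.50", 6443)

def apiserver_reachable(addr=APISERVER, timeout=1.0) -> bool:
    """TCP-level probe mirroring the kubelet's 'dial tcp ... connection refused' errors."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError as err:
        print(f"dial tcp {addr[0]}:{addr[1]}: {err}")
        return False

if __name__ == "__main__":
    apiserver_reachable()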
Mar 19 11:32:53.473435 kubelet[2285]: I0319 11:32:53.473405 2285 server.go:1264] "Started kubelet" Mar 19 11:32:53.474200 kubelet[2285]: I0319 11:32:53.474136 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:32:53.474948 kubelet[2285]: I0319 11:32:53.474915 2285 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:32:53.475011 kubelet[2285]: I0319 11:32:53.474963 2285 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:32:53.475011 kubelet[2285]: I0319 11:32:53.474976 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:32:53.477221 kubelet[2285]: I0319 11:32:53.475974 2285 server.go:455] "Adding debug handlers to kubelet server" Mar 19 11:32:53.480416 kubelet[2285]: E0319 11:32:53.480248 2285 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e30fe0936fda6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:32:53.473385894 +0000 UTC m=+0.939342627,LastTimestamp:2025-03-19 11:32:53.473385894 +0000 UTC m=+0.939342627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:32:53.480797 kubelet[2285]: I0319 11:32:53.480780 2285 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 11:32:53.480975 kubelet[2285]: I0319 11:32:53.480960 2285 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:32:53.481256 kubelet[2285]: I0319 11:32:53.481242 2285 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:32:53.481625 kubelet[2285]: W0319 11:32:53.481586 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.481754 kubelet[2285]: E0319 11:32:53.481739 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.482300 kubelet[2285]: E0319 11:32:53.482228 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Mar 19 11:32:53.484013 kubelet[2285]: I0319 11:32:53.483982 2285 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:32:53.484083 kubelet[2285]: I0319 11:32:53.484063 2285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:32:53.484988 kubelet[2285]: I0319 11:32:53.484958 2285 factory.go:221] Registration of the containerd container 
factory successfully Mar 19 11:32:53.496879 kubelet[2285]: I0319 11:32:53.496768 2285 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:32:53.496879 kubelet[2285]: I0319 11:32:53.496864 2285 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:32:53.496879 kubelet[2285]: I0319 11:32:53.496878 2285 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:32:53.499126 kubelet[2285]: I0319 11:32:53.499030 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:32:53.500206 kubelet[2285]: I0319 11:32:53.500170 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:32:53.500415 kubelet[2285]: I0319 11:32:53.500323 2285 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:32:53.500415 kubelet[2285]: I0319 11:32:53.500338 2285 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 11:32:53.500415 kubelet[2285]: E0319 11:32:53.500376 2285 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:32:53.501128 kubelet[2285]: W0319 11:32:53.501075 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.501128 kubelet[2285]: E0319 11:32:53.501126 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:53.561932 kubelet[2285]: I0319 11:32:53.561831 2285 policy_none.go:49] "None policy: Start" Mar 19 11:32:53.562547 kubelet[2285]: I0319 11:32:53.562519 2285 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:32:53.562547 kubelet[2285]: I0319 11:32:53.562544 2285 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:32:53.567453 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:32:53.580909 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:32:53.581587 kubelet[2285]: I0319 11:32:53.581558 2285 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:32:53.581937 kubelet[2285]: E0319 11:32:53.581864 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Mar 19 11:32:53.583433 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 19 11:32:53.595496 kubelet[2285]: I0319 11:32:53.595456 2285 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:32:53.595784 kubelet[2285]: I0319 11:32:53.595632 2285 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:32:53.595784 kubelet[2285]: I0319 11:32:53.595772 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:32:53.596970 kubelet[2285]: E0319 11:32:53.596938 2285 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 19 11:32:53.601455 kubelet[2285]: I0319 11:32:53.601409 2285 topology_manager.go:215] "Topology Admit Handler" podUID="452c4d9458011503546d166c18e819d9" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 19 11:32:53.602336 kubelet[2285]: I0319 11:32:53.602316 2285 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 19 11:32:53.603064 kubelet[2285]: I0319 11:32:53.603041 2285 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 19 11:32:53.608026 systemd[1]: Created slice kubepods-burstable-pod452c4d9458011503546d166c18e819d9.slice - libcontainer container kubepods-burstable-pod452c4d9458011503546d166c18e819d9.slice. Mar 19 11:32:53.619943 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 19 11:32:53.633980 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
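Editor's note: with the systemd cgroup driver in use here (SystemdCgroup:true in the containerd config and CgroupDriver "systemd" above), each admitted static pod gets a systemd slice named from its QoS class and UID; the log shows kubepods-burstable-pod452c4d9458011503546d166c18e819d9.slice matching podUID 452c4d9458011503546d166c18e819d9. The sketch below is a rough approximation of that naming for burstable/besteffort pods (guaranteed pods omit the QoS segment, and the real kubelet also escapes '-' and '.' in UIDs, which these hex UIDs happen not to contain).

def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    """Approximate the systemd slice name the kubelet creates for a pod's cgroup."""
    # systemd unit naming replaces '-' inside a component with '_'; no-op for these UIDs.
    escaped = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped}.slice"

# UIDs taken from the "Topology Admit Handler" lines above.
for uid in ("452c4d9458011503546d166c18e819d9",
            "23a18e2dc14f395c5f1bea711a5a9344",
            "d79ab404294384d4bcc36fb5b5509bbb"):
    print(pod_slice_name(uid))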
Mar 19 11:32:53.683157 kubelet[2285]: E0319 11:32:53.683105 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Mar 19 11:32:53.782867 kubelet[2285]: I0319 11:32:53.782543 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452c4d9458011503546d166c18e819d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"452c4d9458011503546d166c18e819d9\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:53.782867 kubelet[2285]: I0319 11:32:53.782575 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:53.782867 kubelet[2285]: I0319 11:32:53.782598 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:53.782867 kubelet[2285]: I0319 11:32:53.782624 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:32:53.782867 kubelet[2285]: I0319 11:32:53.782650 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452c4d9458011503546d166c18e819d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"452c4d9458011503546d166c18e819d9\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:53.783716 kubelet[2285]: I0319 11:32:53.782677 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452c4d9458011503546d166c18e819d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"452c4d9458011503546d166c18e819d9\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:53.783716 kubelet[2285]: I0319 11:32:53.782732 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:53.783716 kubelet[2285]: I0319 11:32:53.782762 2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:53.783716 kubelet[2285]: I0319 11:32:53.782787 2285 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:53.783716 kubelet[2285]: I0319 11:32:53.783153 2285 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:32:53.783716 kubelet[2285]: E0319 11:32:53.783382 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Mar 19 11:32:53.918583 kubelet[2285]: E0319 11:32:53.918524 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:53.919481 containerd[1461]: time="2025-03-19T11:32:53.919372875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:452c4d9458011503546d166c18e819d9,Namespace:kube-system,Attempt:0,}" Mar 19 11:32:53.933041 kubelet[2285]: E0319 11:32:53.932993 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:53.933546 containerd[1461]: time="2025-03-19T11:32:53.933321789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 19 11:32:53.935881 kubelet[2285]: E0319 11:32:53.935853 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:53.936165 containerd[1461]: time="2025-03-19T11:32:53.936140813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 19 11:32:54.083545 kubelet[2285]: E0319 11:32:54.083435 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Mar 19 11:32:54.185280 kubelet[2285]: I0319 11:32:54.185259 2285 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:32:54.185669 kubelet[2285]: E0319 11:32:54.185640 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Mar 19 11:32:54.404990 kubelet[2285]: W0319 11:32:54.404960 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.404990 kubelet[2285]: E0319 11:32:54.404996 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 
11:32:54.434283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117960297.mount: Deactivated successfully. Mar 19 11:32:54.438987 containerd[1461]: time="2025-03-19T11:32:54.438943051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:32:54.439437 kubelet[2285]: W0319 11:32:54.439347 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.439437 kubelet[2285]: E0319 11:32:54.439417 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.440838 containerd[1461]: time="2025-03-19T11:32:54.440791545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 19 11:32:54.442567 containerd[1461]: time="2025-03-19T11:32:54.442532517Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:32:54.443674 containerd[1461]: time="2025-03-19T11:32:54.443635001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:32:54.444241 containerd[1461]: time="2025-03-19T11:32:54.444203035Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:32:54.446362 containerd[1461]: time="2025-03-19T11:32:54.446320135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:32:54.447031 containerd[1461]: time="2025-03-19T11:32:54.446677408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:32:54.449722 containerd[1461]: time="2025-03-19T11:32:54.449269150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.802193ms" Mar 19 11:32:54.450650 containerd[1461]: time="2025-03-19T11:32:54.450585838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:32:54.451577 containerd[1461]: time="2025-03-19T11:32:54.451553418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 515.36156ms" Mar 19 11:32:54.453363 containerd[1461]: time="2025-03-19T11:32:54.453327735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.947015ms" Mar 19 11:32:54.599095 containerd[1461]: time="2025-03-19T11:32:54.599000250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:32:54.599577 containerd[1461]: time="2025-03-19T11:32:54.599116899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:32:54.599577 containerd[1461]: time="2025-03-19T11:32:54.599187073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:32:54.599577 containerd[1461]: time="2025-03-19T11:32:54.599318654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:32:54.599577 containerd[1461]: time="2025-03-19T11:32:54.599375697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:32:54.599577 containerd[1461]: time="2025-03-19T11:32:54.599393591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:32:54.599577 containerd[1461]: time="2025-03-19T11:32:54.599444790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:32:54.599826 containerd[1461]: time="2025-03-19T11:32:54.599459321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:32:54.603238 containerd[1461]: time="2025-03-19T11:32:54.603149104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:32:54.603238 containerd[1461]: time="2025-03-19T11:32:54.603201264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:32:54.603238 containerd[1461]: time="2025-03-19T11:32:54.603224682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:32:54.603343 containerd[1461]: time="2025-03-19T11:32:54.603290052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:32:54.624876 systemd[1]: Started cri-containerd-34d37a48338805199e9377bf2091969c710b788f01463227d8a455369652d35a.scope - libcontainer container 34d37a48338805199e9377bf2091969c710b788f01463227d8a455369652d35a. Mar 19 11:32:54.626198 systemd[1]: Started cri-containerd-935fe9a7a5504a5ba979c478de0a22fa9e88c254a057c7f0f45f7fcbde4bfe44.scope - libcontainer container 935fe9a7a5504a5ba979c478de0a22fa9e88c254a057c7f0f45f7fcbde4bfe44. 
Mar 19 11:32:54.628421 systemd[1]: Started cri-containerd-ebd3600479cad45c9fa27430a17f2fbd6f71e6423e4a65913833276de5e3394a.scope - libcontainer container ebd3600479cad45c9fa27430a17f2fbd6f71e6423e4a65913833276de5e3394a. Mar 19 11:32:54.657013 containerd[1461]: time="2025-03-19T11:32:54.656882448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:452c4d9458011503546d166c18e819d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"34d37a48338805199e9377bf2091969c710b788f01463227d8a455369652d35a\"" Mar 19 11:32:54.658051 kubelet[2285]: E0319 11:32:54.658019 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:54.661169 containerd[1461]: time="2025-03-19T11:32:54.661135782Z" level=info msg="CreateContainer within sandbox \"34d37a48338805199e9377bf2091969c710b788f01463227d8a455369652d35a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:32:54.662212 containerd[1461]: time="2025-03-19T11:32:54.662188227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"935fe9a7a5504a5ba979c478de0a22fa9e88c254a057c7f0f45f7fcbde4bfe44\"" Mar 19 11:32:54.663279 containerd[1461]: time="2025-03-19T11:32:54.663193356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebd3600479cad45c9fa27430a17f2fbd6f71e6423e4a65913833276de5e3394a\"" Mar 19 11:32:54.663955 kubelet[2285]: E0319 11:32:54.663789 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:54.663955 kubelet[2285]: E0319 11:32:54.663926 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:54.666444 containerd[1461]: time="2025-03-19T11:32:54.666267868Z" level=info msg="CreateContainer within sandbox \"ebd3600479cad45c9fa27430a17f2fbd6f71e6423e4a65913833276de5e3394a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:32:54.666444 containerd[1461]: time="2025-03-19T11:32:54.666408735Z" level=info msg="CreateContainer within sandbox \"935fe9a7a5504a5ba979c478de0a22fa9e88c254a057c7f0f45f7fcbde4bfe44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:32:54.678335 kubelet[2285]: W0319 11:32:54.678269 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.678335 kubelet[2285]: E0319 11:32:54.678333 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.682451 containerd[1461]: time="2025-03-19T11:32:54.682411937Z" level=info msg="CreateContainer within sandbox \"935fe9a7a5504a5ba979c478de0a22fa9e88c254a057c7f0f45f7fcbde4bfe44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"1d6893363ef71d08c2b2409e657cffc6295152f68c68a7b2f031e3a9520bd102\"" Mar 19 11:32:54.683109 containerd[1461]: time="2025-03-19T11:32:54.683085212Z" level=info msg="StartContainer for \"1d6893363ef71d08c2b2409e657cffc6295152f68c68a7b2f031e3a9520bd102\"" Mar 19 11:32:54.683955 containerd[1461]: time="2025-03-19T11:32:54.683921972Z" level=info msg="CreateContainer within sandbox \"34d37a48338805199e9377bf2091969c710b788f01463227d8a455369652d35a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3cd07c4b1baf315bf86eee03e1a11a48b41ddd219faa40380a3f85c6342e1da4\"" Mar 19 11:32:54.684300 containerd[1461]: time="2025-03-19T11:32:54.684271600Z" level=info msg="StartContainer for \"3cd07c4b1baf315bf86eee03e1a11a48b41ddd219faa40380a3f85c6342e1da4\"" Mar 19 11:32:54.686789 containerd[1461]: time="2025-03-19T11:32:54.686681403Z" level=info msg="CreateContainer within sandbox \"ebd3600479cad45c9fa27430a17f2fbd6f71e6423e4a65913833276de5e3394a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2fec2568065654e4b3bbc9f1f367d2451de6c947ffb9740b761755d58522057a\"" Mar 19 11:32:54.687269 containerd[1461]: time="2025-03-19T11:32:54.687242352Z" level=info msg="StartContainer for \"2fec2568065654e4b3bbc9f1f367d2451de6c947ffb9740b761755d58522057a\"" Mar 19 11:32:54.705860 systemd[1]: Started cri-containerd-1d6893363ef71d08c2b2409e657cffc6295152f68c68a7b2f031e3a9520bd102.scope - libcontainer container 1d6893363ef71d08c2b2409e657cffc6295152f68c68a7b2f031e3a9520bd102. Mar 19 11:32:54.709394 systemd[1]: Started cri-containerd-2fec2568065654e4b3bbc9f1f367d2451de6c947ffb9740b761755d58522057a.scope - libcontainer container 2fec2568065654e4b3bbc9f1f367d2451de6c947ffb9740b761755d58522057a. Mar 19 11:32:54.710748 systemd[1]: Started cri-containerd-3cd07c4b1baf315bf86eee03e1a11a48b41ddd219faa40380a3f85c6342e1da4.scope - libcontainer container 3cd07c4b1baf315bf86eee03e1a11a48b41ddd219faa40380a3f85c6342e1da4. 
Mar 19 11:32:54.767888 containerd[1461]: time="2025-03-19T11:32:54.767842489Z" level=info msg="StartContainer for \"1d6893363ef71d08c2b2409e657cffc6295152f68c68a7b2f031e3a9520bd102\" returns successfully" Mar 19 11:32:54.768291 containerd[1461]: time="2025-03-19T11:32:54.768089638Z" level=info msg="StartContainer for \"2fec2568065654e4b3bbc9f1f367d2451de6c947ffb9740b761755d58522057a\" returns successfully" Mar 19 11:32:54.768291 containerd[1461]: time="2025-03-19T11:32:54.768113376Z" level=info msg="StartContainer for \"3cd07c4b1baf315bf86eee03e1a11a48b41ddd219faa40380a3f85c6342e1da4\" returns successfully" Mar 19 11:32:54.884633 kubelet[2285]: E0319 11:32:54.884542 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" Mar 19 11:32:54.948357 kubelet[2285]: W0319 11:32:54.948186 2285 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.948357 kubelet[2285]: E0319 11:32:54.948255 2285 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Mar 19 11:32:54.987309 kubelet[2285]: I0319 11:32:54.987101 2285 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:32:55.509396 kubelet[2285]: E0319 11:32:55.509326 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:55.512499 kubelet[2285]: E0319 11:32:55.511972 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:55.513140 kubelet[2285]: E0319 11:32:55.513118 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:56.325171 kubelet[2285]: I0319 11:32:56.325124 2285 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 19 11:32:56.473319 kubelet[2285]: I0319 11:32:56.473283 2285 apiserver.go:52] "Watching apiserver" Mar 19 11:32:56.481581 kubelet[2285]: I0319 11:32:56.481559 2285 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:32:56.519398 kubelet[2285]: E0319 11:32:56.519357 2285 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:56.520580 kubelet[2285]: E0319 11:32:56.519820 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:58.475076 systemd[1]: Reload requested from client PID 2562 ('systemctl') (unit session-7.scope)... Mar 19 11:32:58.475093 systemd[1]: Reloading... 
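The "Failed to ensure lease exists, will retry" lines show the kubelet backing off while the API server at 10.0.0.50:6443 refuses connections: the retry interval grows from 400ms to 800ms to 1.6s across this log. A rough sketch of that doubling backoff; the cap and the stub error below are assumptions made for the example, not the kubelet's actual lease-controller code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease stands in for the coordination.k8s.io call that keeps
    // failing in the log; it is a placeholder, not a real API client.
    func ensureLease() error {
        return errors.New("dial tcp 10.0.0.50:6443: connect: connection refused")
    }

    func main() {
        interval := 400 * time.Millisecond // first retry interval reported in the log
        maxInterval := 7 * time.Second     // assumed cap, chosen for this sketch

        for attempt := 0; attempt < 4; attempt++ {
            err := ensureLease()
            if err == nil {
                break
            }
            fmt.Printf("failed to ensure lease, will retry: %v interval=%s\n", err, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
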
Mar 19 11:32:58.544960 zram_generator::config[2609]: No configuration found. Mar 19 11:32:58.623919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:32:58.706210 systemd[1]: Reloading finished in 230 ms. Mar 19 11:32:58.725692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:58.738622 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:32:58.738921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:58.738972 systemd[1]: kubelet.service: Consumed 1.298s CPU time, 119.5M memory peak. Mar 19 11:32:58.747976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:32:58.845690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:32:58.849653 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:32:58.904744 kubelet[2648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:32:58.904744 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:32:58.904744 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:32:58.905088 kubelet[2648]: I0319 11:32:58.904794 2648 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:32:58.909878 kubelet[2648]: I0319 11:32:58.909851 2648 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 19 11:32:58.909878 kubelet[2648]: I0319 11:32:58.909875 2648 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:32:58.910049 kubelet[2648]: I0319 11:32:58.910033 2648 server.go:927] "Client rotation is on, will bootstrap in background" Mar 19 11:32:58.912272 kubelet[2648]: I0319 11:32:58.912245 2648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 11:32:58.913838 kubelet[2648]: I0319 11:32:58.913809 2648 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:32:58.919324 kubelet[2648]: I0319 11:32:58.919288 2648 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 11:32:58.919511 kubelet[2648]: I0319 11:32:58.919445 2648 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:32:58.921731 kubelet[2648]: I0319 11:32:58.919470 2648 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 11:32:58.921731 kubelet[2648]: I0319 11:32:58.919614 2648 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:32:58.921731 kubelet[2648]: I0319 11:32:58.919621 2648 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 11:32:58.921731 kubelet[2648]: I0319 11:32:58.919652 2648 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:32:58.921731 kubelet[2648]: I0319 11:32:58.919820 2648 kubelet.go:400] "Attempting to sync node with API server" Mar 19 11:32:58.921924 kubelet[2648]: I0319 11:32:58.919838 2648 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:32:58.921924 kubelet[2648]: I0319 11:32:58.919868 2648 kubelet.go:312] "Adding apiserver pod source" Mar 19 11:32:58.921924 kubelet[2648]: I0319 11:32:58.920256 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:32:58.921924 kubelet[2648]: I0319 11:32:58.920887 2648 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:32:58.921924 kubelet[2648]: I0319 11:32:58.921894 2648 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:32:58.922277 kubelet[2648]: I0319 11:32:58.922246 2648 server.go:1264] "Started kubelet" Mar 19 11:32:58.923807 kubelet[2648]: I0319 11:32:58.923760 2648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:32:58.923941 kubelet[2648]: I0319 11:32:58.923921 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:32:58.924008 kubelet[2648]: I0319 11:32:58.923981 2648 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:32:58.924038 kubelet[2648]: I0319 11:32:58.924026 2648 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:32:58.924948 kubelet[2648]: I0319 11:32:58.924922 2648 server.go:455] "Adding debug handlers to kubelet server" Mar 19 11:32:58.933726 kubelet[2648]: E0319 11:32:58.933679 2648 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:32:58.936420 kubelet[2648]: I0319 11:32:58.934208 2648 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 11:32:58.936420 kubelet[2648]: I0319 11:32:58.934240 2648 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:32:58.936420 kubelet[2648]: I0319 11:32:58.935575 2648 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:32:58.936420 kubelet[2648]: E0319 11:32:58.936154 2648 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:32:58.948692 kubelet[2648]: I0319 11:32:58.948656 2648 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:32:58.948775 kubelet[2648]: I0319 11:32:58.948744 2648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:32:58.950399 kubelet[2648]: I0319 11:32:58.950363 2648 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:32:58.955966 kubelet[2648]: I0319 11:32:58.955939 2648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:32:58.956900 kubelet[2648]: I0319 11:32:58.956879 2648 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 11:32:58.957002 kubelet[2648]: I0319 11:32:58.956914 2648 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:32:58.957002 kubelet[2648]: I0319 11:32:58.956927 2648 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 11:32:58.957002 kubelet[2648]: E0319 11:32:58.956962 2648 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:32:58.985361 kubelet[2648]: I0319 11:32:58.985266 2648 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:32:58.985361 kubelet[2648]: I0319 11:32:58.985285 2648 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:32:58.985361 kubelet[2648]: I0319 11:32:58.985302 2648 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:32:58.986273 kubelet[2648]: I0319 11:32:58.986245 2648 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:32:58.986273 kubelet[2648]: I0319 11:32:58.986265 2648 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:32:58.986370 kubelet[2648]: I0319 11:32:58.986282 2648 policy_none.go:49] "None policy: Start" Mar 19 11:32:58.986848 kubelet[2648]: I0319 11:32:58.986829 2648 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:32:58.986900 kubelet[2648]: I0319 11:32:58.986855 2648 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:32:58.987008 kubelet[2648]: I0319 11:32:58.986973 2648 state_mem.go:75] "Updated machine memory state" Mar 19 11:32:58.990786 kubelet[2648]: I0319 11:32:58.990707 2648 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:32:58.990896 kubelet[2648]: I0319 11:32:58.990858 2648 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:32:58.991072 kubelet[2648]: I0319 11:32:58.990962 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:32:59.039243 kubelet[2648]: I0319 11:32:59.039217 2648 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:32:59.044928 kubelet[2648]: I0319 11:32:59.044831 2648 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 19 11:32:59.044928 kubelet[2648]: I0319 11:32:59.044895 2648 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 19 11:32:59.058016 kubelet[2648]: I0319 11:32:59.057983 2648 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 19 11:32:59.058122 kubelet[2648]: I0319 11:32:59.058094 2648 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 19 11:32:59.058671 kubelet[2648]: I0319 11:32:59.058130 2648 topology_manager.go:215] "Topology Admit Handler" podUID="452c4d9458011503546d166c18e819d9" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 19 11:32:59.136566 kubelet[2648]: I0319 11:32:59.136516 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 
11:32:59.136958 kubelet[2648]: I0319 11:32:59.136939 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:59.136989 kubelet[2648]: I0319 11:32:59.136967 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:32:59.136989 kubelet[2648]: I0319 11:32:59.136983 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452c4d9458011503546d166c18e819d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"452c4d9458011503546d166c18e819d9\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:59.137050 kubelet[2648]: I0319 11:32:59.136998 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452c4d9458011503546d166c18e819d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"452c4d9458011503546d166c18e819d9\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:59.137050 kubelet[2648]: I0319 11:32:59.137015 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:59.137050 kubelet[2648]: I0319 11:32:59.137030 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:59.137167 kubelet[2648]: I0319 11:32:59.137093 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:59.137194 kubelet[2648]: I0319 11:32:59.137177 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452c4d9458011503546d166c18e819d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"452c4d9458011503546d166c18e819d9\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:59.363860 kubelet[2648]: E0319 11:32:59.363805 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:59.363860 kubelet[2648]: E0319 11:32:59.363839 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:59.364138 kubelet[2648]: E0319 11:32:59.364035 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:59.921299 kubelet[2648]: I0319 11:32:59.921235 2648 apiserver.go:52] "Watching apiserver" Mar 19 11:32:59.936493 kubelet[2648]: I0319 11:32:59.936419 2648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:32:59.976020 kubelet[2648]: E0319 11:32:59.975983 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:59.982161 kubelet[2648]: E0319 11:32:59.982127 2648 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:32:59.982513 kubelet[2648]: E0319 11:32:59.982491 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:59.982825 kubelet[2648]: E0319 11:32:59.982802 2648 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 19 11:32:59.983342 kubelet[2648]: E0319 11:32:59.983298 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:32:59.998597 kubelet[2648]: I0319 11:32:59.997706 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.997669535 podStartE2EDuration="997.669535ms" podCreationTimestamp="2025-03-19 11:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:32:59.995510305 +0000 UTC m=+1.142760945" watchObservedRunningTime="2025-03-19 11:32:59.997669535 +0000 UTC m=+1.144920094" Mar 19 11:33:00.016914 kubelet[2648]: I0319 11:33:00.016856 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.01683823 podStartE2EDuration="1.01683823s" podCreationTimestamp="2025-03-19 11:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:33:00.016800254 +0000 UTC m=+1.164050813" watchObservedRunningTime="2025-03-19 11:33:00.01683823 +0000 UTC m=+1.164088829" Mar 19 11:33:00.017060 kubelet[2648]: I0319 11:33:00.016930 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.016925228 podStartE2EDuration="1.016925228s" podCreationTimestamp="2025-03-19 11:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:33:00.006207828 +0000 UTC m=+1.153458387" watchObservedRunningTime="2025-03-19 11:33:00.016925228 +0000 UTC m=+1.164175827" Mar 19 11:33:00.978067 kubelet[2648]: E0319 11:33:00.978023 2648 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:00.980330 kubelet[2648]: E0319 11:33:00.978834 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:01.979063 kubelet[2648]: E0319 11:33:01.979025 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:03.040207 kubelet[2648]: E0319 11:33:03.040164 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:03.447728 kubelet[2648]: E0319 11:33:03.447349 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:03.505818 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:03.507181 sshd[1646]: Connection closed by 10.0.0.1 port 45674 Mar 19 11:33:03.507321 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:03.511261 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:33:03.511387 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:45674.service: Deactivated successfully. Mar 19 11:33:03.513142 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:33:03.513306 systemd[1]: session-7.scope: Consumed 7.717s CPU time, 247.3M memory peak. Mar 19 11:33:03.514980 systemd-logind[1447]: Removed session 7. Mar 19 11:33:11.647131 kubelet[2648]: E0319 11:33:11.647082 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:12.117137 update_engine[1452]: I20250319 11:33:12.117062 1452 update_attempter.cc:509] Updating boot flags... Mar 19 11:33:12.161716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2749) Mar 19 11:33:12.198538 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2752) Mar 19 11:33:12.223602 kubelet[2648]: I0319 11:33:12.223556 2648 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 11:33:12.243721 containerd[1461]: time="2025-03-19T11:33:12.243076267Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 19 11:33:12.245802 kubelet[2648]: I0319 11:33:12.244110 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 11:33:12.279814 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2752) Mar 19 11:33:13.048450 kubelet[2648]: E0319 11:33:13.048378 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:13.195642 kubelet[2648]: I0319 11:33:13.195592 2648 topology_manager.go:215] "Topology Admit Handler" podUID="d592b675-3c07-4642-8654-bc5df7f4807e" podNamespace="kube-system" podName="kube-proxy-snqt5" Mar 19 11:33:13.206742 systemd[1]: Created slice kubepods-besteffort-podd592b675_3c07_4642_8654_bc5df7f4807e.slice - libcontainer container kubepods-besteffort-podd592b675_3c07_4642_8654_bc5df7f4807e.slice. Mar 19 11:33:13.229882 kubelet[2648]: I0319 11:33:13.229853 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d592b675-3c07-4642-8654-bc5df7f4807e-xtables-lock\") pod \"kube-proxy-snqt5\" (UID: \"d592b675-3c07-4642-8654-bc5df7f4807e\") " pod="kube-system/kube-proxy-snqt5" Mar 19 11:33:13.230075 kubelet[2648]: I0319 11:33:13.229976 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcgmk\" (UniqueName: \"kubernetes.io/projected/d592b675-3c07-4642-8654-bc5df7f4807e-kube-api-access-rcgmk\") pod \"kube-proxy-snqt5\" (UID: \"d592b675-3c07-4642-8654-bc5df7f4807e\") " pod="kube-system/kube-proxy-snqt5" Mar 19 11:33:13.230075 kubelet[2648]: I0319 11:33:13.229997 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d592b675-3c07-4642-8654-bc5df7f4807e-kube-proxy\") pod \"kube-proxy-snqt5\" (UID: \"d592b675-3c07-4642-8654-bc5df7f4807e\") " pod="kube-system/kube-proxy-snqt5" Mar 19 11:33:13.230195 kubelet[2648]: I0319 11:33:13.230163 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d592b675-3c07-4642-8654-bc5df7f4807e-lib-modules\") pod \"kube-proxy-snqt5\" (UID: \"d592b675-3c07-4642-8654-bc5df7f4807e\") " pod="kube-system/kube-proxy-snqt5" Mar 19 11:33:13.309092 kubelet[2648]: I0319 11:33:13.308756 2648 topology_manager.go:215] "Topology Admit Handler" podUID="c3353fda-6dd2-4d0a-9246-c349cb61bec4" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-2jsp6" Mar 19 11:33:13.318044 systemd[1]: Created slice kubepods-besteffort-podc3353fda_6dd2_4d0a_9246_c349cb61bec4.slice - libcontainer container kubepods-besteffort-podc3353fda_6dd2_4d0a_9246_c349cb61bec4.slice. 
Mar 19 11:33:13.331014 kubelet[2648]: I0319 11:33:13.330974 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3353fda-6dd2-4d0a-9246-c349cb61bec4-var-lib-calico\") pod \"tigera-operator-6479d6dc54-2jsp6\" (UID: \"c3353fda-6dd2-4d0a-9246-c349cb61bec4\") " pod="tigera-operator/tigera-operator-6479d6dc54-2jsp6" Mar 19 11:33:13.331014 kubelet[2648]: I0319 11:33:13.331015 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64tc\" (UniqueName: \"kubernetes.io/projected/c3353fda-6dd2-4d0a-9246-c349cb61bec4-kube-api-access-x64tc\") pod \"tigera-operator-6479d6dc54-2jsp6\" (UID: \"c3353fda-6dd2-4d0a-9246-c349cb61bec4\") " pod="tigera-operator/tigera-operator-6479d6dc54-2jsp6" Mar 19 11:33:13.455573 kubelet[2648]: E0319 11:33:13.455529 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:13.522816 kubelet[2648]: E0319 11:33:13.522768 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:13.525036 containerd[1461]: time="2025-03-19T11:33:13.524985309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snqt5,Uid:d592b675-3c07-4642-8654-bc5df7f4807e,Namespace:kube-system,Attempt:0,}" Mar 19 11:33:13.543043 containerd[1461]: time="2025-03-19T11:33:13.542954671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:13.543043 containerd[1461]: time="2025-03-19T11:33:13.543012364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:13.543043 containerd[1461]: time="2025-03-19T11:33:13.543026847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:13.543248 containerd[1461]: time="2025-03-19T11:33:13.543106584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:13.562931 systemd[1]: Started cri-containerd-f35687656d4755ffa8ef363eabd959f1eb75ff833013049fe979b80b73913b18.scope - libcontainer container f35687656d4755ffa8ef363eabd959f1eb75ff833013049fe979b80b73913b18. 
Mar 19 11:33:13.579879 containerd[1461]: time="2025-03-19T11:33:13.579835999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snqt5,Uid:d592b675-3c07-4642-8654-bc5df7f4807e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f35687656d4755ffa8ef363eabd959f1eb75ff833013049fe979b80b73913b18\"" Mar 19 11:33:13.584172 kubelet[2648]: E0319 11:33:13.584134 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:13.588226 containerd[1461]: time="2025-03-19T11:33:13.588184003Z" level=info msg="CreateContainer within sandbox \"f35687656d4755ffa8ef363eabd959f1eb75ff833013049fe979b80b73913b18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:33:13.603638 containerd[1461]: time="2025-03-19T11:33:13.603554203Z" level=info msg="CreateContainer within sandbox \"f35687656d4755ffa8ef363eabd959f1eb75ff833013049fe979b80b73913b18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ec61562356002c21b6ee8241dee98f22ce037d2b4ae32722d87893098942fc9\"" Mar 19 11:33:13.606919 containerd[1461]: time="2025-03-19T11:33:13.606872680Z" level=info msg="StartContainer for \"1ec61562356002c21b6ee8241dee98f22ce037d2b4ae32722d87893098942fc9\"" Mar 19 11:33:13.620504 containerd[1461]: time="2025-03-19T11:33:13.620141907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-2jsp6,Uid:c3353fda-6dd2-4d0a-9246-c349cb61bec4,Namespace:tigera-operator,Attempt:0,}" Mar 19 11:33:13.632852 systemd[1]: Started cri-containerd-1ec61562356002c21b6ee8241dee98f22ce037d2b4ae32722d87893098942fc9.scope - libcontainer container 1ec61562356002c21b6ee8241dee98f22ce037d2b4ae32722d87893098942fc9. Mar 19 11:33:13.640497 containerd[1461]: time="2025-03-19T11:33:13.640409886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:13.640497 containerd[1461]: time="2025-03-19T11:33:13.640469418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:13.640711 containerd[1461]: time="2025-03-19T11:33:13.640661700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:13.640896 containerd[1461]: time="2025-03-19T11:33:13.640850341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:13.661587 systemd[1]: Started cri-containerd-aa4869455f59aa141b2c8d89ddd1bb5dce3a2da593cc8202297823b7e3f49624.scope - libcontainer container aa4869455f59aa141b2c8d89ddd1bb5dce3a2da593cc8202297823b7e3f49624. 
Mar 19 11:33:13.664259 containerd[1461]: time="2025-03-19T11:33:13.664226071Z" level=info msg="StartContainer for \"1ec61562356002c21b6ee8241dee98f22ce037d2b4ae32722d87893098942fc9\" returns successfully" Mar 19 11:33:13.704687 containerd[1461]: time="2025-03-19T11:33:13.704581989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-2jsp6,Uid:c3353fda-6dd2-4d0a-9246-c349cb61bec4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aa4869455f59aa141b2c8d89ddd1bb5dce3a2da593cc8202297823b7e3f49624\"" Mar 19 11:33:13.707124 containerd[1461]: time="2025-03-19T11:33:13.707098373Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 19 11:33:14.004802 kubelet[2648]: E0319 11:33:14.004679 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:15.699750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328901543.mount: Deactivated successfully. Mar 19 11:33:17.972870 containerd[1461]: time="2025-03-19T11:33:17.972821195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:17.973343 containerd[1461]: time="2025-03-19T11:33:17.973244150Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 19 11:33:17.974072 containerd[1461]: time="2025-03-19T11:33:17.974043453Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:17.976181 containerd[1461]: time="2025-03-19T11:33:17.976131266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:17.977171 containerd[1461]: time="2025-03-19T11:33:17.977041508Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 4.269911489s" Mar 19 11:33:17.977171 containerd[1461]: time="2025-03-19T11:33:17.977077594Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 19 11:33:17.987518 containerd[1461]: time="2025-03-19T11:33:17.987488573Z" level=info msg="CreateContainer within sandbox \"aa4869455f59aa141b2c8d89ddd1bb5dce3a2da593cc8202297823b7e3f49624\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 19 11:33:18.000785 containerd[1461]: time="2025-03-19T11:33:18.000751340Z" level=info msg="CreateContainer within sandbox \"aa4869455f59aa141b2c8d89ddd1bb5dce3a2da593cc8202297823b7e3f49624\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"47856e097496268117be83abac81f808aed6922a66ce6867afeccc379ad4f7dd\"" Mar 19 11:33:18.001322 containerd[1461]: time="2025-03-19T11:33:18.001269272Z" level=info msg="StartContainer for \"47856e097496268117be83abac81f808aed6922a66ce6867afeccc379ad4f7dd\"" Mar 19 11:33:18.019614 systemd[1]: 
run-containerd-runc-k8s.io-47856e097496268117be83abac81f808aed6922a66ce6867afeccc379ad4f7dd-runc.StMVxN.mount: Deactivated successfully. Mar 19 11:33:18.027844 systemd[1]: Started cri-containerd-47856e097496268117be83abac81f808aed6922a66ce6867afeccc379ad4f7dd.scope - libcontainer container 47856e097496268117be83abac81f808aed6922a66ce6867afeccc379ad4f7dd. Mar 19 11:33:18.049583 containerd[1461]: time="2025-03-19T11:33:18.049526904Z" level=info msg="StartContainer for \"47856e097496268117be83abac81f808aed6922a66ce6867afeccc379ad4f7dd\" returns successfully" Mar 19 11:33:19.058436 kubelet[2648]: I0319 11:33:19.058146 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-snqt5" podStartSLOduration=6.058128301 podStartE2EDuration="6.058128301s" podCreationTimestamp="2025-03-19 11:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:33:14.017774485 +0000 UTC m=+15.165025124" watchObservedRunningTime="2025-03-19 11:33:19.058128301 +0000 UTC m=+20.205378900" Mar 19 11:33:19.058436 kubelet[2648]: I0319 11:33:19.058275 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-2jsp6" podStartSLOduration=1.779175109 podStartE2EDuration="6.058269484s" podCreationTimestamp="2025-03-19 11:33:13 +0000 UTC" firstStartedPulling="2025-03-19 11:33:13.705852904 +0000 UTC m=+14.853103503" lastFinishedPulling="2025-03-19 11:33:17.984947319 +0000 UTC m=+19.132197878" observedRunningTime="2025-03-19 11:33:19.057551647 +0000 UTC m=+20.204802286" watchObservedRunningTime="2025-03-19 11:33:19.058269484 +0000 UTC m=+20.205520043" Mar 19 11:33:21.255582 kubelet[2648]: I0319 11:33:21.255542 2648 topology_manager.go:215] "Topology Admit Handler" podUID="2c03d738-9f23-4096-86a3-05bb7d11cd33" podNamespace="calico-system" podName="calico-typha-865495c766-wmgwc" Mar 19 11:33:21.268501 systemd[1]: Created slice kubepods-besteffort-pod2c03d738_9f23_4096_86a3_05bb7d11cd33.slice - libcontainer container kubepods-besteffort-pod2c03d738_9f23_4096_86a3_05bb7d11cd33.slice. 
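The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, the span from podCreationTimestamp to observedRunningTime, and podStartSLOduration, which appears to be that same span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted; for kube-proxy nothing was pulled, so the two match. A back-of-the-envelope check using the tigera-operator timestamps from the entry above; the tracker uses its own clock readings, so the results only approximately reproduce the logged values:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // The log uses Go's default time.Time formatting; a fractional
        // second in the input is accepted even without it in the layout.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the tigera-operator-6479d6dc54-2jsp6 entry.
        created := mustParse("2025-03-19 11:33:13 +0000 UTC")
        firstPull := mustParse("2025-03-19 11:33:13.705852904 +0000 UTC")
        lastPull := mustParse("2025-03-19 11:33:17.984947319 +0000 UTC")
        running := mustParse("2025-03-19 11:33:19.057551647 +0000 UTC")

        e2e := running.Sub(created)          // ~podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // image-pull time excluded

        fmt.Println("podStartE2EDuration ~", e2e)
        fmt.Println("podStartSLOduration ~", slo)
    }
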
Mar 19 11:33:21.285370 kubelet[2648]: I0319 11:33:21.285262 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c03d738-9f23-4096-86a3-05bb7d11cd33-typha-certs\") pod \"calico-typha-865495c766-wmgwc\" (UID: \"2c03d738-9f23-4096-86a3-05bb7d11cd33\") " pod="calico-system/calico-typha-865495c766-wmgwc" Mar 19 11:33:21.285900 kubelet[2648]: I0319 11:33:21.285878 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c03d738-9f23-4096-86a3-05bb7d11cd33-tigera-ca-bundle\") pod \"calico-typha-865495c766-wmgwc\" (UID: \"2c03d738-9f23-4096-86a3-05bb7d11cd33\") " pod="calico-system/calico-typha-865495c766-wmgwc" Mar 19 11:33:21.285960 kubelet[2648]: I0319 11:33:21.285919 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghr6\" (UniqueName: \"kubernetes.io/projected/2c03d738-9f23-4096-86a3-05bb7d11cd33-kube-api-access-dghr6\") pod \"calico-typha-865495c766-wmgwc\" (UID: \"2c03d738-9f23-4096-86a3-05bb7d11cd33\") " pod="calico-system/calico-typha-865495c766-wmgwc" Mar 19 11:33:21.314509 kubelet[2648]: I0319 11:33:21.312907 2648 topology_manager.go:215] "Topology Admit Handler" podUID="b8418314-f551-4549-8e9b-a4ccc369850a" podNamespace="calico-system" podName="calico-node-pb47d" Mar 19 11:33:21.322161 systemd[1]: Created slice kubepods-besteffort-podb8418314_f551_4549_8e9b_a4ccc369850a.slice - libcontainer container kubepods-besteffort-podb8418314_f551_4549_8e9b_a4ccc369850a.slice. Mar 19 11:33:21.386669 kubelet[2648]: I0319 11:33:21.386548 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-xtables-lock\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.386669 kubelet[2648]: I0319 11:33:21.386589 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-var-lib-calico\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.386669 kubelet[2648]: I0319 11:33:21.386607 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-cni-net-dir\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.386669 kubelet[2648]: I0319 11:33:21.386629 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcsr\" (UniqueName: \"kubernetes.io/projected/b8418314-f551-4549-8e9b-a4ccc369850a-kube-api-access-2tcsr\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.386669 kubelet[2648]: I0319 11:33:21.386649 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8418314-f551-4549-8e9b-a4ccc369850a-tigera-ca-bundle\") pod \"calico-node-pb47d\" (UID: 
\"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387361 kubelet[2648]: I0319 11:33:21.386672 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-flexvol-driver-host\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387361 kubelet[2648]: I0319 11:33:21.386853 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-lib-modules\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387361 kubelet[2648]: I0319 11:33:21.386874 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b8418314-f551-4549-8e9b-a4ccc369850a-node-certs\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387361 kubelet[2648]: I0319 11:33:21.386908 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-var-run-calico\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387361 kubelet[2648]: I0319 11:33:21.386946 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-cni-log-dir\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387470 kubelet[2648]: I0319 11:33:21.387369 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-policysync\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.387470 kubelet[2648]: I0319 11:33:21.387395 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b8418314-f551-4549-8e9b-a4ccc369850a-cni-bin-dir\") pod \"calico-node-pb47d\" (UID: \"b8418314-f551-4549-8e9b-a4ccc369850a\") " pod="calico-system/calico-node-pb47d" Mar 19 11:33:21.423979 kubelet[2648]: I0319 11:33:21.423919 2648 topology_manager.go:215] "Topology Admit Handler" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" podNamespace="calico-system" podName="csi-node-driver-zqr5d" Mar 19 11:33:21.425524 kubelet[2648]: E0319 11:33:21.425479 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:21.488560 kubelet[2648]: I0319 11:33:21.488516 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnwf5\" 
(UniqueName: \"kubernetes.io/projected/bbfe9adc-4e2f-44ac-a3f4-b25842fbe645-kube-api-access-dnwf5\") pod \"csi-node-driver-zqr5d\" (UID: \"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645\") " pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:21.488560 kubelet[2648]: I0319 11:33:21.488561 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bbfe9adc-4e2f-44ac-a3f4-b25842fbe645-registration-dir\") pod \"csi-node-driver-zqr5d\" (UID: \"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645\") " pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:21.488717 kubelet[2648]: I0319 11:33:21.488594 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbfe9adc-4e2f-44ac-a3f4-b25842fbe645-kubelet-dir\") pod \"csi-node-driver-zqr5d\" (UID: \"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645\") " pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:21.488717 kubelet[2648]: I0319 11:33:21.488688 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bbfe9adc-4e2f-44ac-a3f4-b25842fbe645-socket-dir\") pod \"csi-node-driver-zqr5d\" (UID: \"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645\") " pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:21.488769 kubelet[2648]: I0319 11:33:21.488734 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bbfe9adc-4e2f-44ac-a3f4-b25842fbe645-varrun\") pod \"csi-node-driver-zqr5d\" (UID: \"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645\") " pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:21.490478 kubelet[2648]: E0319 11:33:21.490410 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.490478 kubelet[2648]: W0319 11:33:21.490426 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.490478 kubelet[2648]: E0319 11:33:21.490442 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.494462 kubelet[2648]: E0319 11:33:21.494439 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.494462 kubelet[2648]: W0319 11:33:21.494454 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.494570 kubelet[2648]: E0319 11:33:21.494467 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:21.504929 kubelet[2648]: E0319 11:33:21.504911 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.504929 kubelet[2648]: W0319 11:33:21.504926 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.505035 kubelet[2648]: E0319 11:33:21.504940 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.573927 kubelet[2648]: E0319 11:33:21.573840 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:21.574342 containerd[1461]: time="2025-03-19T11:33:21.574311108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-865495c766-wmgwc,Uid:2c03d738-9f23-4096-86a3-05bb7d11cd33,Namespace:calico-system,Attempt:0,}" Mar 19 11:33:21.589444 kubelet[2648]: E0319 11:33:21.589411 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.589444 kubelet[2648]: W0319 11:33:21.589433 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.589562 kubelet[2648]: E0319 11:33:21.589453 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.589730 kubelet[2648]: E0319 11:33:21.589693 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.589730 kubelet[2648]: W0319 11:33:21.589727 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.589788 kubelet[2648]: E0319 11:33:21.589742 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.589945 kubelet[2648]: E0319 11:33:21.589930 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.589945 kubelet[2648]: W0319 11:33:21.589941 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.590017 kubelet[2648]: E0319 11:33:21.589954 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:21.590199 kubelet[2648]: E0319 11:33:21.590171 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.590199 kubelet[2648]: W0319 11:33:21.590184 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.590199 kubelet[2648]: E0319 11:33:21.590196 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.590383 kubelet[2648]: E0319 11:33:21.590370 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.590383 kubelet[2648]: W0319 11:33:21.590379 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.590442 kubelet[2648]: E0319 11:33:21.590392 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.590595 kubelet[2648]: E0319 11:33:21.590582 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.590595 kubelet[2648]: W0319 11:33:21.590592 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.590654 kubelet[2648]: E0319 11:33:21.590605 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.590771 kubelet[2648]: E0319 11:33:21.590760 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.590805 kubelet[2648]: W0319 11:33:21.590791 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.590844 kubelet[2648]: E0319 11:33:21.590818 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.590951 kubelet[2648]: E0319 11:33:21.590941 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.590974 kubelet[2648]: W0319 11:33:21.590951 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.590974 kubelet[2648]: E0319 11:33:21.590970 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:21.591117 kubelet[2648]: E0319 11:33:21.591107 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.591117 kubelet[2648]: W0319 11:33:21.591116 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.591161 kubelet[2648]: E0319 11:33:21.591137 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.591260 kubelet[2648]: E0319 11:33:21.591247 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.591283 kubelet[2648]: W0319 11:33:21.591260 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.591283 kubelet[2648]: E0319 11:33:21.591277 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.591403 kubelet[2648]: E0319 11:33:21.591390 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.591403 kubelet[2648]: W0319 11:33:21.591402 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.591445 kubelet[2648]: E0319 11:33:21.591415 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.591567 kubelet[2648]: E0319 11:33:21.591557 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.591589 kubelet[2648]: W0319 11:33:21.591567 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.591589 kubelet[2648]: E0319 11:33:21.591579 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.591740 kubelet[2648]: E0319 11:33:21.591730 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.591767 kubelet[2648]: W0319 11:33:21.591740 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.591767 kubelet[2648]: E0319 11:33:21.591751 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:21.591973 kubelet[2648]: E0319 11:33:21.591960 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.591973 kubelet[2648]: W0319 11:33:21.591970 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592039 kubelet[2648]: E0319 11:33:21.591982 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.592157 kubelet[2648]: E0319 11:33:21.592145 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.592182 kubelet[2648]: W0319 11:33:21.592158 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592182 kubelet[2648]: E0319 11:33:21.592170 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.592345 kubelet[2648]: E0319 11:33:21.592330 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.592365 kubelet[2648]: W0319 11:33:21.592345 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592387 kubelet[2648]: E0319 11:33:21.592366 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.592501 kubelet[2648]: E0319 11:33:21.592491 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.592521 kubelet[2648]: W0319 11:33:21.592500 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592543 kubelet[2648]: E0319 11:33:21.592520 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.592646 kubelet[2648]: E0319 11:33:21.592637 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.592668 kubelet[2648]: W0319 11:33:21.592646 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592690 kubelet[2648]: E0319 11:33:21.592665 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:21.592807 kubelet[2648]: E0319 11:33:21.592796 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.592807 kubelet[2648]: W0319 11:33:21.592806 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592851 kubelet[2648]: E0319 11:33:21.592825 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.592952 kubelet[2648]: E0319 11:33:21.592941 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.592972 kubelet[2648]: W0319 11:33:21.592951 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.592972 kubelet[2648]: E0319 11:33:21.592964 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.593246 kubelet[2648]: E0319 11:33:21.593221 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.593246 kubelet[2648]: W0319 11:33:21.593237 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.593297 kubelet[2648]: E0319 11:33:21.593252 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.593440 kubelet[2648]: E0319 11:33:21.593427 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.593462 kubelet[2648]: W0319 11:33:21.593440 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.593462 kubelet[2648]: E0319 11:33:21.593454 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.593620 kubelet[2648]: E0319 11:33:21.593609 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.593643 kubelet[2648]: W0319 11:33:21.593621 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.593643 kubelet[2648]: E0319 11:33:21.593629 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:21.593813 kubelet[2648]: E0319 11:33:21.593802 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.593834 kubelet[2648]: W0319 11:33:21.593815 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.593834 kubelet[2648]: E0319 11:33:21.593824 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.603627 kubelet[2648]: E0319 11:33:21.603565 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.603627 kubelet[2648]: W0319 11:33:21.603584 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.603627 kubelet[2648]: E0319 11:33:21.603597 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.621193 kubelet[2648]: E0319 11:33:21.621175 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:21.621193 kubelet[2648]: W0319 11:33:21.621190 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:21.621293 kubelet[2648]: E0319 11:33:21.621203 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:21.625889 kubelet[2648]: E0319 11:33:21.625854 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:21.626305 containerd[1461]: time="2025-03-19T11:33:21.626267233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pb47d,Uid:b8418314-f551-4549-8e9b-a4ccc369850a,Namespace:calico-system,Attempt:0,}" Mar 19 11:33:21.642013 containerd[1461]: time="2025-03-19T11:33:21.640960590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:21.642013 containerd[1461]: time="2025-03-19T11:33:21.641029280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:21.642013 containerd[1461]: time="2025-03-19T11:33:21.641043722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:21.642013 containerd[1461]: time="2025-03-19T11:33:21.641115813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:21.658880 systemd[1]: Started cri-containerd-2029e1c6d4cc0c8cb5aa2accf52e03763af7bee38d85cea9405859debaabff1d.scope - libcontainer container 2029e1c6d4cc0c8cb5aa2accf52e03763af7bee38d85cea9405859debaabff1d. Mar 19 11:33:21.667120 containerd[1461]: time="2025-03-19T11:33:21.667020045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:21.667207 containerd[1461]: time="2025-03-19T11:33:21.667094496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:21.667207 containerd[1461]: time="2025-03-19T11:33:21.667134342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:21.667265 containerd[1461]: time="2025-03-19T11:33:21.667228436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:21.694845 systemd[1]: Started cri-containerd-db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8.scope - libcontainer container db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8. Mar 19 11:33:21.703218 containerd[1461]: time="2025-03-19T11:33:21.703134763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-865495c766-wmgwc,Uid:2c03d738-9f23-4096-86a3-05bb7d11cd33,Namespace:calico-system,Attempt:0,} returns sandbox id \"2029e1c6d4cc0c8cb5aa2accf52e03763af7bee38d85cea9405859debaabff1d\"" Mar 19 11:33:21.703844 kubelet[2648]: E0319 11:33:21.703822 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:21.705165 containerd[1461]: time="2025-03-19T11:33:21.705141983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 19 11:33:21.715913 containerd[1461]: time="2025-03-19T11:33:21.715861945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pb47d,Uid:b8418314-f551-4549-8e9b-a4ccc369850a,Namespace:calico-system,Attempt:0,} returns sandbox id \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\"" Mar 19 11:33:21.716445 kubelet[2648]: E0319 11:33:21.716427 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:22.958258 kubelet[2648]: E0319 11:33:22.958196 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:23.269952 containerd[1461]: time="2025-03-19T11:33:23.269823044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:23.270585 containerd[1461]: time="2025-03-19T11:33:23.270543464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 19 11:33:23.271386 containerd[1461]: time="2025-03-19T11:33:23.271359056Z" level=info msg="ImageCreate event 
name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:23.273692 containerd[1461]: time="2025-03-19T11:33:23.273634769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:23.274312 containerd[1461]: time="2025-03-19T11:33:23.274274737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.56910199s" Mar 19 11:33:23.274312 containerd[1461]: time="2025-03-19T11:33:23.274309462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 19 11:33:23.275980 containerd[1461]: time="2025-03-19T11:33:23.275828470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 19 11:33:23.288626 containerd[1461]: time="2025-03-19T11:33:23.288053872Z" level=info msg="CreateContainer within sandbox \"2029e1c6d4cc0c8cb5aa2accf52e03763af7bee38d85cea9405859debaabff1d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 19 11:33:23.304433 containerd[1461]: time="2025-03-19T11:33:23.304337472Z" level=info msg="CreateContainer within sandbox \"2029e1c6d4cc0c8cb5aa2accf52e03763af7bee38d85cea9405859debaabff1d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8ec05c2fdf0be8a46baaf71b1216c2a7b940ee0b6e8c472123c846cec4c678f1\"" Mar 19 11:33:23.306692 containerd[1461]: time="2025-03-19T11:33:23.306628267Z" level=info msg="StartContainer for \"8ec05c2fdf0be8a46baaf71b1216c2a7b940ee0b6e8c472123c846cec4c678f1\"" Mar 19 11:33:23.337474 systemd[1]: Started cri-containerd-8ec05c2fdf0be8a46baaf71b1216c2a7b940ee0b6e8c472123c846cec4c678f1.scope - libcontainer container 8ec05c2fdf0be8a46baaf71b1216c2a7b940ee0b6e8c472123c846cec4c678f1. 
Mar 19 11:33:23.368815 containerd[1461]: time="2025-03-19T11:33:23.368766053Z" level=info msg="StartContainer for \"8ec05c2fdf0be8a46baaf71b1216c2a7b940ee0b6e8c472123c846cec4c678f1\" returns successfully" Mar 19 11:33:24.030999 kubelet[2648]: E0319 11:33:24.030945 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:24.042011 kubelet[2648]: I0319 11:33:24.041949 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-865495c766-wmgwc" podStartSLOduration=1.471171792 podStartE2EDuration="3.041933536s" podCreationTimestamp="2025-03-19 11:33:21 +0000 UTC" firstStartedPulling="2025-03-19 11:33:21.704633867 +0000 UTC m=+22.851884466" lastFinishedPulling="2025-03-19 11:33:23.275395651 +0000 UTC m=+24.422646210" observedRunningTime="2025-03-19 11:33:24.041596972 +0000 UTC m=+25.188847571" watchObservedRunningTime="2025-03-19 11:33:24.041933536 +0000 UTC m=+25.189184135" Mar 19 11:33:24.093416 kubelet[2648]: E0319 11:33:24.093338 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.093416 kubelet[2648]: W0319 11:33:24.093366 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.093416 kubelet[2648]: E0319 11:33:24.093390 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.093641 kubelet[2648]: E0319 11:33:24.093617 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.093641 kubelet[2648]: W0319 11:33:24.093629 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.093641 kubelet[2648]: E0319 11:33:24.093641 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.093868 kubelet[2648]: E0319 11:33:24.093842 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.093868 kubelet[2648]: W0319 11:33:24.093855 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.093868 kubelet[2648]: E0319 11:33:24.093864 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.094086 kubelet[2648]: E0319 11:33:24.094065 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.094086 kubelet[2648]: W0319 11:33:24.094078 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.094086 kubelet[2648]: E0319 11:33:24.094087 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.094263 kubelet[2648]: E0319 11:33:24.094246 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.094263 kubelet[2648]: W0319 11:33:24.094257 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.094313 kubelet[2648]: E0319 11:33:24.094265 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.094414 kubelet[2648]: E0319 11:33:24.094402 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.094414 kubelet[2648]: W0319 11:33:24.094412 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.094460 kubelet[2648]: E0319 11:33:24.094420 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.094553 kubelet[2648]: E0319 11:33:24.094543 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.094553 kubelet[2648]: W0319 11:33:24.094552 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.094604 kubelet[2648]: E0319 11:33:24.094559 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.094688 kubelet[2648]: E0319 11:33:24.094679 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.094688 kubelet[2648]: W0319 11:33:24.094687 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.094748 kubelet[2648]: E0319 11:33:24.094705 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.094881 kubelet[2648]: E0319 11:33:24.094859 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.094881 kubelet[2648]: W0319 11:33:24.094870 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.094881 kubelet[2648]: E0319 11:33:24.094878 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.095009 kubelet[2648]: E0319 11:33:24.094999 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.095009 kubelet[2648]: W0319 11:33:24.095008 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.095055 kubelet[2648]: E0319 11:33:24.095017 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.095145 kubelet[2648]: E0319 11:33:24.095136 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.095168 kubelet[2648]: W0319 11:33:24.095144 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.095168 kubelet[2648]: E0319 11:33:24.095151 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.095279 kubelet[2648]: E0319 11:33:24.095269 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.095302 kubelet[2648]: W0319 11:33:24.095279 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.095302 kubelet[2648]: E0319 11:33:24.095287 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.095437 kubelet[2648]: E0319 11:33:24.095426 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.095437 kubelet[2648]: W0319 11:33:24.095435 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.095482 kubelet[2648]: E0319 11:33:24.095443 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.095581 kubelet[2648]: E0319 11:33:24.095569 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.095581 kubelet[2648]: W0319 11:33:24.095578 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.095632 kubelet[2648]: E0319 11:33:24.095585 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.095727 kubelet[2648]: E0319 11:33:24.095716 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.095727 kubelet[2648]: W0319 11:33:24.095727 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.095775 kubelet[2648]: E0319 11:33:24.095734 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.106147 kubelet[2648]: E0319 11:33:24.106118 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.106147 kubelet[2648]: W0319 11:33:24.106139 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.106216 kubelet[2648]: E0319 11:33:24.106153 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.106377 kubelet[2648]: E0319 11:33:24.106353 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.106377 kubelet[2648]: W0319 11:33:24.106366 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.106427 kubelet[2648]: E0319 11:33:24.106382 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.106585 kubelet[2648]: E0319 11:33:24.106565 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.106585 kubelet[2648]: W0319 11:33:24.106578 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.106633 kubelet[2648]: E0319 11:33:24.106591 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.106830 kubelet[2648]: E0319 11:33:24.106810 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.106830 kubelet[2648]: W0319 11:33:24.106822 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.106879 kubelet[2648]: E0319 11:33:24.106844 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.107019 kubelet[2648]: E0319 11:33:24.106999 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.107019 kubelet[2648]: W0319 11:33:24.107010 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.107068 kubelet[2648]: E0319 11:33:24.107024 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.107242 kubelet[2648]: E0319 11:33:24.107221 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.107242 kubelet[2648]: W0319 11:33:24.107233 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.107290 kubelet[2648]: E0319 11:33:24.107244 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.107422 kubelet[2648]: E0319 11:33:24.107410 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.107448 kubelet[2648]: W0319 11:33:24.107422 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.107476 kubelet[2648]: E0319 11:33:24.107445 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.107565 kubelet[2648]: E0319 11:33:24.107554 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.107586 kubelet[2648]: W0319 11:33:24.107564 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.107637 kubelet[2648]: E0319 11:33:24.107623 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.107737 kubelet[2648]: E0319 11:33:24.107724 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.107763 kubelet[2648]: W0319 11:33:24.107737 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.107763 kubelet[2648]: E0319 11:33:24.107750 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.107899 kubelet[2648]: E0319 11:33:24.107888 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.107924 kubelet[2648]: W0319 11:33:24.107899 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.107924 kubelet[2648]: E0319 11:33:24.107911 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.108058 kubelet[2648]: E0319 11:33:24.108045 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.108079 kubelet[2648]: W0319 11:33:24.108058 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.108079 kubelet[2648]: E0319 11:33:24.108069 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.108251 kubelet[2648]: E0319 11:33:24.108240 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.108274 kubelet[2648]: W0319 11:33:24.108251 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.108274 kubelet[2648]: E0319 11:33:24.108263 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.108550 kubelet[2648]: E0319 11:33:24.108530 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.108550 kubelet[2648]: W0319 11:33:24.108546 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.108613 kubelet[2648]: E0319 11:33:24.108562 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.108755 kubelet[2648]: E0319 11:33:24.108743 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.108755 kubelet[2648]: W0319 11:33:24.108752 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.108802 kubelet[2648]: E0319 11:33:24.108775 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.108922 kubelet[2648]: E0319 11:33:24.108910 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.108922 kubelet[2648]: W0319 11:33:24.108919 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.108975 kubelet[2648]: E0319 11:33:24.108937 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.109086 kubelet[2648]: E0319 11:33:24.109076 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.109109 kubelet[2648]: W0319 11:33:24.109085 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.109109 kubelet[2648]: E0319 11:33:24.109098 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.109252 kubelet[2648]: E0319 11:33:24.109242 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.109272 kubelet[2648]: W0319 11:33:24.109252 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.109272 kubelet[2648]: E0319 11:33:24.109260 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 19 11:33:24.109565 kubelet[2648]: E0319 11:33:24.109551 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 19 11:33:24.109565 kubelet[2648]: W0319 11:33:24.109563 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 19 11:33:24.109613 kubelet[2648]: E0319 11:33:24.109573 2648 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 19 11:33:24.420903 containerd[1461]: time="2025-03-19T11:33:24.420850161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:24.421385 containerd[1461]: time="2025-03-19T11:33:24.421319664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 19 11:33:24.422257 containerd[1461]: time="2025-03-19T11:33:24.422223303Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:24.424394 containerd[1461]: time="2025-03-19T11:33:24.424348504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:24.425061 containerd[1461]: time="2025-03-19T11:33:24.425028474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.149166078s" Mar 19 11:33:24.425100 containerd[1461]: time="2025-03-19T11:33:24.425064998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 19 11:33:24.427391 containerd[1461]: time="2025-03-19T11:33:24.427265089Z" level=info msg="CreateContainer within sandbox \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 19 11:33:24.450103 containerd[1461]: time="2025-03-19T11:33:24.450049660Z" level=info msg="CreateContainer within sandbox \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4\"" Mar 19 11:33:24.450684 containerd[1461]: time="2025-03-19T11:33:24.450659220Z" level=info msg="StartContainer for \"e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4\"" Mar 19 11:33:24.480920 systemd[1]: Started cri-containerd-e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4.scope - libcontainer container e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4. Mar 19 11:33:24.540485 containerd[1461]: time="2025-03-19T11:33:24.540437522Z" level=info msg="StartContainer for \"e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4\" returns successfully" Mar 19 11:33:24.554322 systemd[1]: cri-containerd-e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4.scope: Deactivated successfully. Mar 19 11:33:24.573199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4-rootfs.mount: Deactivated successfully. 
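
The driver-call failures that recur throughout the entries above all have the same shape: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the executable does not exist, the captured output is empty, and decoding that empty output as a FlexVolume status JSON fails with "unexpected end of JSON input". The flexvol-driver container started just above is presumably what populates that plugin directory, after which the probe should stop failing. A minimal sketch of the decode failure follows (the DriverStatus shape is an illustrative assumption, not the kubelet's exact type; the error string is what encoding/json returns for empty input).

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative stand-in for a FlexVolume driver status payload (assumed shape).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	output := []byte("") // what a missing driver binary effectively "prints"
	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: failed to unmarshal driver output: unexpected end of JSON input
		fmt.Println("failed to unmarshal driver output:", err)
	}
}
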
Mar 19 11:33:24.619158 containerd[1461]: time="2025-03-19T11:33:24.613897188Z" level=info msg="shim disconnected" id=e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4 namespace=k8s.io Mar 19 11:33:24.619158 containerd[1461]: time="2025-03-19T11:33:24.619015224Z" level=warning msg="cleaning up after shim disconnected" id=e034620f7b113c9c370176be83fabc3d3253a22d5a1ce9ecb7f4b3a215467ee4 namespace=k8s.io Mar 19 11:33:24.619158 containerd[1461]: time="2025-03-19T11:33:24.619032547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:33:24.958334 kubelet[2648]: E0319 11:33:24.957975 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:25.034160 kubelet[2648]: E0319 11:33:25.034115 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:25.035051 kubelet[2648]: I0319 11:33:25.034732 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:25.035820 kubelet[2648]: E0319 11:33:25.035793 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:25.036437 containerd[1461]: time="2025-03-19T11:33:25.036395432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 19 11:33:26.958197 kubelet[2648]: E0319 11:33:26.957834 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:27.466537 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:44322.service - OpenSSH per-connection server daemon (10.0.0.1:44322). Mar 19 11:33:27.512224 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 44322 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:27.513492 sshd-session[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:27.519605 systemd-logind[1447]: New session 8 of user core. Mar 19 11:33:27.528889 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:33:27.655344 sshd[3318]: Connection closed by 10.0.0.1 port 44322 Mar 19 11:33:27.655656 sshd-session[3316]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:27.658467 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:44322.service: Deactivated successfully. Mar 19 11:33:27.660135 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:33:27.661614 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:33:27.662593 systemd-logind[1447]: Removed session 8. 
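[Annotation] The "Nameserver limits exceeded" messages above mean the node's resolv.conf lists more nameservers than the kubelet will propagate into pods, so only the first entries (1.1.1.1, 1.0.0.1 and 8.8.8.8 here) are applied. A rough Go sketch of that trimming, assuming the conventional cap of three nameservers; the kubelet's real logic in dns.go is more involved:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed cap, matching the three servers kept in the log

// readNameservers extracts "nameserver" entries from a resolv.conf-style file.
func readNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	return servers, sc.Err()
}

func main() {
	servers, err := readNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, omitting %d of %d entries\n",
			len(servers)-maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}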
Mar 19 11:33:28.958166 kubelet[2648]: E0319 11:33:28.958019 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:29.410706 containerd[1461]: time="2025-03-19T11:33:29.410530004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:29.411497 containerd[1461]: time="2025-03-19T11:33:29.411273365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 19 11:33:29.412242 containerd[1461]: time="2025-03-19T11:33:29.412160423Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:29.414231 containerd[1461]: time="2025-03-19T11:33:29.414174804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:29.415170 containerd[1461]: time="2025-03-19T11:33:29.414996614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 4.378558977s" Mar 19 11:33:29.415170 containerd[1461]: time="2025-03-19T11:33:29.415028137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 19 11:33:29.417851 containerd[1461]: time="2025-03-19T11:33:29.417758277Z" level=info msg="CreateContainer within sandbox \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 19 11:33:29.441423 containerd[1461]: time="2025-03-19T11:33:29.441359307Z" level=info msg="CreateContainer within sandbox \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f\"" Mar 19 11:33:29.441992 containerd[1461]: time="2025-03-19T11:33:29.441863202Z" level=info msg="StartContainer for \"839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f\"" Mar 19 11:33:29.474882 systemd[1]: Started cri-containerd-839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f.scope - libcontainer container 839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f. 
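[Annotation] The recurring "NetworkReady=false ... cni plugin not initialized" condition clears only once a CNI network configuration exists where the container runtime looks for it; the install-cni container started above is what eventually writes Calico's conflist. A small illustrative Go check, assuming the customary containerd default directory /etc/cni/net.d (that path is an assumption, not something this log confirms):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI network config exists in dir, which is
// what the runtime needs before it can report the network plugin as ready.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d") // assumed default path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("CNI config present; network plugin can initialize")
	} else {
		fmt.Println("no CNI config yet; NetworkReady stays false")
	}
}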
Mar 19 11:33:29.500867 containerd[1461]: time="2025-03-19T11:33:29.500822672Z" level=info msg="StartContainer for \"839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f\" returns successfully" Mar 19 11:33:30.043112 kubelet[2648]: E0319 11:33:30.043076 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:30.086990 systemd[1]: cri-containerd-839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f.scope: Deactivated successfully. Mar 19 11:33:30.087604 systemd[1]: cri-containerd-839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f.scope: Consumed 456ms CPU time, 158.4M memory peak, 4K read from disk, 150.3M written to disk. Mar 19 11:33:30.103184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f-rootfs.mount: Deactivated successfully. Mar 19 11:33:30.138093 containerd[1461]: time="2025-03-19T11:33:30.138020973Z" level=info msg="shim disconnected" id=839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f namespace=k8s.io Mar 19 11:33:30.138093 containerd[1461]: time="2025-03-19T11:33:30.138084220Z" level=warning msg="cleaning up after shim disconnected" id=839878fff448a39abf16a80918ecf17adaad8725fa40a8f00e16d6f637c0059f namespace=k8s.io Mar 19 11:33:30.138093 containerd[1461]: time="2025-03-19T11:33:30.138096941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:33:30.176549 kubelet[2648]: I0319 11:33:30.176513 2648 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 19 11:33:30.195494 kubelet[2648]: I0319 11:33:30.195402 2648 topology_manager.go:215] "Topology Admit Handler" podUID="07fd37a8-ef23-49a6-a372-10e4ce8f9811" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:30.200467 kubelet[2648]: I0319 11:33:30.198842 2648 topology_manager.go:215] "Topology Admit Handler" podUID="0af97ae0-2493-4bbf-a605-0511940d25f4" podNamespace="calico-system" podName="calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:30.200467 kubelet[2648]: I0319 11:33:30.199203 2648 topology_manager.go:215] "Topology Admit Handler" podUID="498e0970-d5ce-4bd8-8d9d-336f0a003145" podNamespace="calico-apiserver" podName="calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:30.201360 kubelet[2648]: I0319 11:33:30.201333 2648 topology_manager.go:215] "Topology Admit Handler" podUID="547e076a-acd7-4d2f-97da-f3027e556484" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:30.201927 kubelet[2648]: I0319 11:33:30.201574 2648 topology_manager.go:215] "Topology Admit Handler" podUID="9811c23a-6a7b-41b7-8fa0-52983b899281" podNamespace="calico-apiserver" podName="calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:30.207512 systemd[1]: Created slice kubepods-burstable-pod07fd37a8_ef23_49a6_a372_10e4ce8f9811.slice - libcontainer container kubepods-burstable-pod07fd37a8_ef23_49a6_a372_10e4ce8f9811.slice. Mar 19 11:33:30.214651 systemd[1]: Created slice kubepods-besteffort-pod0af97ae0_2493_4bbf_a605_0511940d25f4.slice - libcontainer container kubepods-besteffort-pod0af97ae0_2493_4bbf_a605_0511940d25f4.slice. Mar 19 11:33:30.220085 systemd[1]: Created slice kubepods-besteffort-pod498e0970_d5ce_4bd8_8d9d_336f0a003145.slice - libcontainer container kubepods-besteffort-pod498e0970_d5ce_4bd8_8d9d_336f0a003145.slice. 
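[Annotation] The kubepods slices created here encode each pod's QoS class and UID, with the UID's dashes escaped to underscores for systemd, e.g. kubepods-burstable-pod07fd37a8_ef23_49a6_a372_10e4ce8f9811.slice for pod UID 07fd37a8-ef23-49a6-a372-10e4ce8f9811. A simplified Go sketch of that naming, matching the names visible in this log but not the kubelet's own cgroup-name code:

package main

import (
	"fmt"
	"strings"
)

// sliceName builds the systemd slice name used for a pod cgroup: the QoS class
// goes into the prefix and the pod UID's dashes become underscores, since the
// dash is a hierarchy separator in systemd slice names.
func sliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), escaped)
}

func main() {
	fmt.Println(sliceName("burstable", "07fd37a8-ef23-49a6-a372-10e4ce8f9811"))
	// kubepods-burstable-pod07fd37a8_ef23_49a6_a372_10e4ce8f9811.slice
	fmt.Println(sliceName("besteffort", "0af97ae0-2493-4bbf-a605-0511940d25f4"))
	// kubepods-besteffort-pod0af97ae0_2493_4bbf_a605_0511940d25f4.slice
}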
Mar 19 11:33:30.225140 systemd[1]: Created slice kubepods-burstable-pod547e076a_acd7_4d2f_97da_f3027e556484.slice - libcontainer container kubepods-burstable-pod547e076a_acd7_4d2f_97da_f3027e556484.slice. Mar 19 11:33:30.233599 systemd[1]: Created slice kubepods-besteffort-pod9811c23a_6a7b_41b7_8fa0_52983b899281.slice - libcontainer container kubepods-besteffort-pod9811c23a_6a7b_41b7_8fa0_52983b899281.slice. Mar 19 11:33:30.246398 kubelet[2648]: I0319 11:33:30.246364 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9811c23a-6a7b-41b7-8fa0-52983b899281-calico-apiserver-certs\") pod \"calico-apiserver-77c7dddc8f-lckq4\" (UID: \"9811c23a-6a7b-41b7-8fa0-52983b899281\") " pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:30.246398 kubelet[2648]: I0319 11:33:30.246400 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0af97ae0-2493-4bbf-a605-0511940d25f4-tigera-ca-bundle\") pod \"calico-kube-controllers-689bbc887b-2vs8c\" (UID: \"0af97ae0-2493-4bbf-a605-0511940d25f4\") " pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:30.246533 kubelet[2648]: I0319 11:33:30.246425 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwng\" (UniqueName: \"kubernetes.io/projected/0af97ae0-2493-4bbf-a605-0511940d25f4-kube-api-access-smwng\") pod \"calico-kube-controllers-689bbc887b-2vs8c\" (UID: \"0af97ae0-2493-4bbf-a605-0511940d25f4\") " pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:30.246533 kubelet[2648]: I0319 11:33:30.246445 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fd37a8-ef23-49a6-a372-10e4ce8f9811-config-volume\") pod \"coredns-7db6d8ff4d-7zd7r\" (UID: \"07fd37a8-ef23-49a6-a372-10e4ce8f9811\") " pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:30.246533 kubelet[2648]: I0319 11:33:30.246460 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqsq9\" (UniqueName: \"kubernetes.io/projected/07fd37a8-ef23-49a6-a372-10e4ce8f9811-kube-api-access-dqsq9\") pod \"coredns-7db6d8ff4d-7zd7r\" (UID: \"07fd37a8-ef23-49a6-a372-10e4ce8f9811\") " pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:30.246533 kubelet[2648]: I0319 11:33:30.246475 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/498e0970-d5ce-4bd8-8d9d-336f0a003145-calico-apiserver-certs\") pod \"calico-apiserver-77c7dddc8f-6t24c\" (UID: \"498e0970-d5ce-4bd8-8d9d-336f0a003145\") " pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:30.246533 kubelet[2648]: I0319 11:33:30.246490 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wjq\" (UniqueName: \"kubernetes.io/projected/498e0970-d5ce-4bd8-8d9d-336f0a003145-kube-api-access-64wjq\") pod \"calico-apiserver-77c7dddc8f-6t24c\" (UID: \"498e0970-d5ce-4bd8-8d9d-336f0a003145\") " pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:30.246641 kubelet[2648]: I0319 11:33:30.246506 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nn754\" (UniqueName: \"kubernetes.io/projected/547e076a-acd7-4d2f-97da-f3027e556484-kube-api-access-nn754\") pod \"coredns-7db6d8ff4d-zlsx7\" (UID: \"547e076a-acd7-4d2f-97da-f3027e556484\") " pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:30.246641 kubelet[2648]: I0319 11:33:30.246522 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hwk8\" (UniqueName: \"kubernetes.io/projected/9811c23a-6a7b-41b7-8fa0-52983b899281-kube-api-access-7hwk8\") pod \"calico-apiserver-77c7dddc8f-lckq4\" (UID: \"9811c23a-6a7b-41b7-8fa0-52983b899281\") " pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:30.246641 kubelet[2648]: I0319 11:33:30.246538 2648 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/547e076a-acd7-4d2f-97da-f3027e556484-config-volume\") pod \"coredns-7db6d8ff4d-zlsx7\" (UID: \"547e076a-acd7-4d2f-97da-f3027e556484\") " pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:30.512566 kubelet[2648]: E0319 11:33:30.512469 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:30.512982 containerd[1461]: time="2025-03-19T11:33:30.512944419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:0,}" Mar 19 11:33:30.519187 containerd[1461]: time="2025-03-19T11:33:30.519137676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:0,}" Mar 19 11:33:30.524232 containerd[1461]: time="2025-03-19T11:33:30.524053398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:0,}" Mar 19 11:33:30.528216 kubelet[2648]: E0319 11:33:30.527912 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:30.528881 containerd[1461]: time="2025-03-19T11:33:30.528350133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:0,}" Mar 19 11:33:30.539634 containerd[1461]: time="2025-03-19T11:33:30.538146372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:0,}" Mar 19 11:33:30.907999 containerd[1461]: time="2025-03-19T11:33:30.907947955Z" level=error msg="Failed to destroy network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.908193 containerd[1461]: time="2025-03-19T11:33:30.908154257Z" level=error msg="Failed to destroy network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.911158 containerd[1461]: time="2025-03-19T11:33:30.910894068Z" level=error msg="encountered an error cleaning up failed sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.911158 containerd[1461]: time="2025-03-19T11:33:30.910966276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.911398 containerd[1461]: time="2025-03-19T11:33:30.911360317Z" level=error msg="Failed to destroy network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912181 containerd[1461]: time="2025-03-19T11:33:30.911603943Z" level=error msg="encountered an error cleaning up failed sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912181 containerd[1461]: time="2025-03-19T11:33:30.911702114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912302 containerd[1461]: time="2025-03-19T11:33:30.912239171Z" level=error msg="encountered an error cleaning up failed sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912328 containerd[1461]: time="2025-03-19T11:33:30.912300337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912630 kubelet[2648]: E0319 11:33:30.912575 2648 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912683 kubelet[2648]: E0319 11:33:30.912626 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912683 kubelet[2648]: E0319 11:33:30.912656 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:30.912683 kubelet[2648]: E0319 11:33:30.912663 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:30.912683 kubelet[2648]: E0319 11:33:30.912676 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:30.912800 kubelet[2648]: E0319 11:33:30.912682 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:30.912800 kubelet[2648]: E0319 11:33:30.912597 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.912800 kubelet[2648]: E0319 11:33:30.912735 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7zd7r" podUID="07fd37a8-ef23-49a6-a372-10e4ce8f9811" Mar 19 11:33:30.912932 kubelet[2648]: E0319 11:33:30.912759 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:30.912932 kubelet[2648]: E0319 11:33:30.912776 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" podUID="498e0970-d5ce-4bd8-8d9d-336f0a003145" Mar 19 11:33:30.912932 kubelet[2648]: E0319 11:33:30.912777 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:30.913012 kubelet[2648]: E0319 11:33:30.912807 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" podUID="0af97ae0-2493-4bbf-a605-0511940d25f4" Mar 19 11:33:30.915308 containerd[1461]: time="2025-03-19T11:33:30.915257331Z" level=error msg="Failed to destroy network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 19 11:33:30.915607 containerd[1461]: time="2025-03-19T11:33:30.915580245Z" level=error msg="encountered an error cleaning up failed sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.915669 containerd[1461]: time="2025-03-19T11:33:30.915624690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.915960 kubelet[2648]: E0319 11:33:30.915915 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.916017 kubelet[2648]: E0319 11:33:30.915968 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:30.916040 kubelet[2648]: E0319 11:33:30.915988 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:30.916093 kubelet[2648]: E0319 11:33:30.916054 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" podUID="9811c23a-6a7b-41b7-8fa0-52983b899281" Mar 19 11:33:30.921343 containerd[1461]: time="2025-03-19T11:33:30.921287090Z" level=error msg="Failed to destroy network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.921632 containerd[1461]: time="2025-03-19T11:33:30.921604844Z" level=error msg="encountered an error cleaning up failed sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.921678 containerd[1461]: time="2025-03-19T11:33:30.921659010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.921881 kubelet[2648]: E0319 11:33:30.921847 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:30.921939 kubelet[2648]: E0319 11:33:30.921898 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:30.921939 kubelet[2648]: E0319 11:33:30.921916 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:30.921990 kubelet[2648]: E0319 11:33:30.921950 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zlsx7" podUID="547e076a-acd7-4d2f-97da-f3027e556484" Mar 19 11:33:30.963152 systemd[1]: Created slice kubepods-besteffort-podbbfe9adc_4e2f_44ac_a3f4_b25842fbe645.slice - libcontainer container 
kubepods-besteffort-podbbfe9adc_4e2f_44ac_a3f4_b25842fbe645.slice. Mar 19 11:33:30.965417 containerd[1461]: time="2025-03-19T11:33:30.965080735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:0,}" Mar 19 11:33:31.013782 containerd[1461]: time="2025-03-19T11:33:31.013736131Z" level=error msg="Failed to destroy network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.014042 containerd[1461]: time="2025-03-19T11:33:31.014022240Z" level=error msg="encountered an error cleaning up failed sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.014102 containerd[1461]: time="2025-03-19T11:33:31.014076486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.014668 kubelet[2648]: E0319 11:33:31.014278 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.014668 kubelet[2648]: E0319 11:33:31.014334 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:31.014668 kubelet[2648]: E0319 11:33:31.014352 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:31.014888 kubelet[2648]: E0319 11:33:31.014389 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:31.045415 kubelet[2648]: I0319 11:33:31.045386 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948" Mar 19 11:33:31.046844 containerd[1461]: time="2025-03-19T11:33:31.045981400Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:31.046844 containerd[1461]: time="2025-03-19T11:33:31.046147577Z" level=info msg="Ensure that sandbox 3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948 in task-service has been cleanup successfully" Mar 19 11:33:31.046844 containerd[1461]: time="2025-03-19T11:33:31.046586902Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:31.046977 kubelet[2648]: I0319 11:33:31.046060 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d" Mar 19 11:33:31.047968 containerd[1461]: time="2025-03-19T11:33:31.047275213Z" level=info msg="TearDown network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:31.047968 containerd[1461]: time="2025-03-19T11:33:31.047350740Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:31.047968 containerd[1461]: time="2025-03-19T11:33:31.047281053Z" level=info msg="Ensure that sandbox 3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d in task-service has been cleanup successfully" Mar 19 11:33:31.048437 kubelet[2648]: E0319 11:33:31.047615 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:31.048437 kubelet[2648]: I0319 11:33:31.047623 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127" Mar 19 11:33:31.048530 containerd[1461]: time="2025-03-19T11:33:31.048198547Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:31.048530 containerd[1461]: time="2025-03-19T11:33:31.048344362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:1,}" Mar 19 11:33:31.049796 containerd[1461]: time="2025-03-19T11:33:31.049761388Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:31.049796 containerd[1461]: time="2025-03-19T11:33:31.049791231Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:31.050221 containerd[1461]: time="2025-03-19T11:33:31.050190432Z" level=info msg="Ensure that sandbox 710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127 in task-service has been cleanup successfully" Mar 19 
11:33:31.051187 containerd[1461]: time="2025-03-19T11:33:31.050938189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:1,}" Mar 19 11:33:31.051385 containerd[1461]: time="2025-03-19T11:33:31.051360472Z" level=info msg="TearDown network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:31.051385 containerd[1461]: time="2025-03-19T11:33:31.051383394Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:31.052464 kubelet[2648]: E0319 11:33:31.052381 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:31.055726 kubelet[2648]: E0319 11:33:31.053867 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:31.055816 containerd[1461]: time="2025-03-19T11:33:31.054107474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:1,}" Mar 19 11:33:31.057858 kubelet[2648]: I0319 11:33:31.057826 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc" Mar 19 11:33:31.058160 containerd[1461]: time="2025-03-19T11:33:31.057909824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 19 11:33:31.059128 kubelet[2648]: I0319 11:33:31.059103 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3" Mar 19 11:33:31.059789 containerd[1461]: time="2025-03-19T11:33:31.059570234Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:31.059789 containerd[1461]: time="2025-03-19T11:33:31.059727010Z" level=info msg="Ensure that sandbox d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc in task-service has been cleanup successfully" Mar 19 11:33:31.060314 containerd[1461]: time="2025-03-19T11:33:31.059883267Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:31.060314 containerd[1461]: time="2025-03-19T11:33:31.059897748Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:31.061046 containerd[1461]: time="2025-03-19T11:33:31.061022103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:1,}" Mar 19 11:33:31.061584 containerd[1461]: time="2025-03-19T11:33:31.061154237Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:31.061812 kubelet[2648]: I0319 11:33:31.061777 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7" Mar 19 11:33:31.062231 containerd[1461]: time="2025-03-19T11:33:31.062206105Z" level=info msg="Ensure that sandbox 
1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3 in task-service has been cleanup successfully" Mar 19 11:33:31.063060 containerd[1461]: time="2025-03-19T11:33:31.062376482Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:31.063293 containerd[1461]: time="2025-03-19T11:33:31.063275535Z" level=info msg="Ensure that sandbox a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7 in task-service has been cleanup successfully" Mar 19 11:33:31.064359 containerd[1461]: time="2025-03-19T11:33:31.063964325Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:31.064471 containerd[1461]: time="2025-03-19T11:33:31.064451775Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:31.064943 containerd[1461]: time="2025-03-19T11:33:31.064870858Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" Mar 19 11:33:31.064943 containerd[1461]: time="2025-03-19T11:33:31.064891060Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 11:33:31.067261 containerd[1461]: time="2025-03-19T11:33:31.065336746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:1,}" Mar 19 11:33:31.067916 containerd[1461]: time="2025-03-19T11:33:31.067791638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:1,}" Mar 19 11:33:31.144675 containerd[1461]: time="2025-03-19T11:33:31.144623403Z" level=error msg="Failed to destroy network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.145674 containerd[1461]: time="2025-03-19T11:33:31.144910072Z" level=error msg="encountered an error cleaning up failed sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.145674 containerd[1461]: time="2025-03-19T11:33:31.144974679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.145761 kubelet[2648]: E0319 11:33:31.145208 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.145761 kubelet[2648]: E0319 11:33:31.145261 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:31.145761 kubelet[2648]: E0319 11:33:31.145282 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:31.145857 kubelet[2648]: E0319 11:33:31.145325 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7zd7r" podUID="07fd37a8-ef23-49a6-a372-10e4ce8f9811" Mar 19 11:33:31.162014 containerd[1461]: time="2025-03-19T11:33:31.161912657Z" level=error msg="Failed to destroy network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.163950 containerd[1461]: time="2025-03-19T11:33:31.163918383Z" level=error msg="encountered an error cleaning up failed sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.164095 containerd[1461]: time="2025-03-19T11:33:31.164073919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.164833 kubelet[2648]: E0319 11:33:31.164371 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.164833 kubelet[2648]: E0319 11:33:31.164437 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:31.164833 kubelet[2648]: E0319 11:33:31.164461 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:31.164972 kubelet[2648]: E0319 11:33:31.164498 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" podUID="9811c23a-6a7b-41b7-8fa0-52983b899281" Mar 19 11:33:31.183957 containerd[1461]: time="2025-03-19T11:33:31.183868710Z" level=error msg="Failed to destroy network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.184206 containerd[1461]: time="2025-03-19T11:33:31.184180782Z" level=error msg="Failed to destroy network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.184657 containerd[1461]: time="2025-03-19T11:33:31.184632668Z" level=error msg="encountered an error cleaning up failed sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.185090 containerd[1461]: time="2025-03-19T11:33:31.184738319Z" level=error msg="encountered an error cleaning up failed sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.185148 containerd[1461]: time="2025-03-19T11:33:31.185126079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.185629 containerd[1461]: time="2025-03-19T11:33:31.185063273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.185682 kubelet[2648]: E0319 11:33:31.185462 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.185682 kubelet[2648]: E0319 11:33:31.185462 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.185682 kubelet[2648]: E0319 11:33:31.185521 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:31.185682 kubelet[2648]: E0319 11:33:31.185531 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:31.186012 kubelet[2648]: E0319 11:33:31.185542 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:31.186012 kubelet[2648]: E0319 11:33:31.185548 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:31.186012 kubelet[2648]: E0319 11:33:31.185579 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zlsx7" podUID="547e076a-acd7-4d2f-97da-f3027e556484" Mar 19 11:33:31.186139 kubelet[2648]: E0319 11:33:31.185579 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" podUID="0af97ae0-2493-4bbf-a605-0511940d25f4" Mar 19 11:33:31.196681 containerd[1461]: time="2025-03-19T11:33:31.196638580Z" level=error msg="Failed to destroy network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.197469 containerd[1461]: time="2025-03-19T11:33:31.196943772Z" level=error msg="encountered an error cleaning up failed sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.197469 containerd[1461]: time="2025-03-19T11:33:31.196990576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.197718 kubelet[2648]: E0319 11:33:31.197672 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.197839 kubelet[2648]: E0319 11:33:31.197821 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:31.198405 kubelet[2648]: E0319 11:33:31.197887 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:31.198405 kubelet[2648]: E0319 11:33:31.197934 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" podUID="498e0970-d5ce-4bd8-8d9d-336f0a003145" Mar 19 11:33:31.202269 containerd[1461]: time="2025-03-19T11:33:31.202228074Z" level=error msg="Failed to destroy network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.202507 containerd[1461]: time="2025-03-19T11:33:31.202481220Z" level=error msg="encountered an error cleaning up failed sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.202552 containerd[1461]: time="2025-03-19T11:33:31.202531305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:1,} 
failed, error" error="failed to setup network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.202746 kubelet[2648]: E0319 11:33:31.202721 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:31.202792 kubelet[2648]: E0319 11:33:31.202760 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:31.202792 kubelet[2648]: E0319 11:33:31.202777 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:31.202843 kubelet[2648]: E0319 11:33:31.202810 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:31.428940 systemd[1]: run-netns-cni\x2d6cf7454a\x2d1577\x2dd5b6\x2d5d29\x2df87beffa63a3.mount: Deactivated successfully. Mar 19 11:33:31.429030 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7-shm.mount: Deactivated successfully. Mar 19 11:33:31.429089 systemd[1]: run-netns-cni\x2d1682662d\x2dba63\x2d60b2\x2de556\x2d13fcc7acf6fd.mount: Deactivated successfully. Mar 19 11:33:31.429146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948-shm.mount: Deactivated successfully. 
Mar 19 11:33:32.064869 kubelet[2648]: I0319 11:33:32.064801 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3" Mar 19 11:33:32.065535 containerd[1461]: time="2025-03-19T11:33:32.065506457Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:32.065966 containerd[1461]: time="2025-03-19T11:33:32.065664992Z" level=info msg="Ensure that sandbox 2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3 in task-service has been cleanup successfully" Mar 19 11:33:32.066007 containerd[1461]: time="2025-03-19T11:33:32.065975503Z" level=info msg="TearDown network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" successfully" Mar 19 11:33:32.066007 containerd[1461]: time="2025-03-19T11:33:32.065991065Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" returns successfully" Mar 19 11:33:32.067200 containerd[1461]: time="2025-03-19T11:33:32.067165142Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:32.068276 containerd[1461]: time="2025-03-19T11:33:32.067427168Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:32.068276 containerd[1461]: time="2025-03-19T11:33:32.067440369Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:32.068404 kubelet[2648]: I0319 11:33:32.067799 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63" Mar 19 11:33:32.067612 systemd[1]: run-netns-cni\x2d443c0c77\x2d308f\x2d021b\x2d07e6\x2d4607ebe9435a.mount: Deactivated successfully. Mar 19 11:33:32.068755 containerd[1461]: time="2025-03-19T11:33:32.068719416Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:32.069276 containerd[1461]: time="2025-03-19T11:33:32.068722976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:2,}" Mar 19 11:33:32.069276 containerd[1461]: time="2025-03-19T11:33:32.069215505Z" level=info msg="Ensure that sandbox e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63 in task-service has been cleanup successfully" Mar 19 11:33:32.070584 containerd[1461]: time="2025-03-19T11:33:32.069918815Z" level=info msg="TearDown network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" successfully" Mar 19 11:33:32.070584 containerd[1461]: time="2025-03-19T11:33:32.069945058Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" returns successfully" Mar 19 11:33:32.071027 systemd[1]: run-netns-cni\x2dc897cd45\x2d0ffb\x2dab90\x2d762d\x2d769bbbeaacf2.mount: Deactivated successfully. 
Mar 19 11:33:32.071535 containerd[1461]: time="2025-03-19T11:33:32.071503293Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:32.071599 containerd[1461]: time="2025-03-19T11:33:32.071587581Z" level=info msg="TearDown network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:32.071625 containerd[1461]: time="2025-03-19T11:33:32.071598222Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:32.072153 kubelet[2648]: I0319 11:33:32.071916 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468" Mar 19 11:33:32.072153 kubelet[2648]: E0319 11:33:32.072060 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:32.072462 containerd[1461]: time="2025-03-19T11:33:32.072354657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:2,}" Mar 19 11:33:32.073308 containerd[1461]: time="2025-03-19T11:33:32.072661568Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:32.073308 containerd[1461]: time="2025-03-19T11:33:32.072820664Z" level=info msg="Ensure that sandbox d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468 in task-service has been cleanup successfully" Mar 19 11:33:32.074590 systemd[1]: run-netns-cni\x2d45ec28c2\x2ddb91\x2d519a\x2d3b96\x2dac1da6eaffec.mount: Deactivated successfully. 
Mar 19 11:33:32.074748 containerd[1461]: time="2025-03-19T11:33:32.074692930Z" level=info msg="TearDown network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" successfully" Mar 19 11:33:32.074748 containerd[1461]: time="2025-03-19T11:33:32.074725813Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" returns successfully" Mar 19 11:33:32.076284 containerd[1461]: time="2025-03-19T11:33:32.075890569Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:32.076284 containerd[1461]: time="2025-03-19T11:33:32.075973537Z" level=info msg="TearDown network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:32.076284 containerd[1461]: time="2025-03-19T11:33:32.075983458Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:32.076473 kubelet[2648]: E0319 11:33:32.076156 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:32.076528 containerd[1461]: time="2025-03-19T11:33:32.076494749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:2,}" Mar 19 11:33:32.077455 kubelet[2648]: I0319 11:33:32.077433 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027" Mar 19 11:33:32.078824 containerd[1461]: time="2025-03-19T11:33:32.078758254Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:32.079895 containerd[1461]: time="2025-03-19T11:33:32.079289707Z" level=info msg="Ensure that sandbox f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027 in task-service has been cleanup successfully" Mar 19 11:33:32.080267 containerd[1461]: time="2025-03-19T11:33:32.080183715Z" level=info msg="TearDown network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" successfully" Mar 19 11:33:32.080267 containerd[1461]: time="2025-03-19T11:33:32.080259083Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" returns successfully" Mar 19 11:33:32.082489 containerd[1461]: time="2025-03-19T11:33:32.082442460Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:32.082604 containerd[1461]: time="2025-03-19T11:33:32.082583074Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:32.082604 containerd[1461]: time="2025-03-19T11:33:32.082599076Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:32.082925 kubelet[2648]: I0319 11:33:32.082898 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0" Mar 19 11:33:32.083204 systemd[1]: run-netns-cni\x2d72c0f9e8\x2da9ed\x2dec8f\x2d0730\x2d84d10c1acd4d.mount: Deactivated successfully. 
Mar 19 11:33:32.084226 containerd[1461]: time="2025-03-19T11:33:32.083221697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:2,}" Mar 19 11:33:32.084372 containerd[1461]: time="2025-03-19T11:33:32.084347849Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:32.084753 containerd[1461]: time="2025-03-19T11:33:32.084538668Z" level=info msg="Ensure that sandbox 78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0 in task-service has been cleanup successfully" Mar 19 11:33:32.085079 containerd[1461]: time="2025-03-19T11:33:32.084974752Z" level=info msg="TearDown network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" successfully" Mar 19 11:33:32.085123 containerd[1461]: time="2025-03-19T11:33:32.085076842Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" returns successfully" Mar 19 11:33:32.085522 containerd[1461]: time="2025-03-19T11:33:32.085500444Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:32.085604 containerd[1461]: time="2025-03-19T11:33:32.085588533Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:32.085732 containerd[1461]: time="2025-03-19T11:33:32.085603174Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:32.085773 kubelet[2648]: I0319 11:33:32.085592 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997" Mar 19 11:33:32.086357 containerd[1461]: time="2025-03-19T11:33:32.086329326Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:32.086427 containerd[1461]: time="2025-03-19T11:33:32.086376571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:2,}" Mar 19 11:33:32.086904 containerd[1461]: time="2025-03-19T11:33:32.086501143Z" level=info msg="Ensure that sandbox 7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997 in task-service has been cleanup successfully" Mar 19 11:33:32.086904 containerd[1461]: time="2025-03-19T11:33:32.086712524Z" level=info msg="TearDown network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" successfully" Mar 19 11:33:32.086904 containerd[1461]: time="2025-03-19T11:33:32.086727246Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" returns successfully" Mar 19 11:33:32.087286 containerd[1461]: time="2025-03-19T11:33:32.087251578Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:32.087431 containerd[1461]: time="2025-03-19T11:33:32.087327025Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" Mar 19 11:33:32.087431 containerd[1461]: time="2025-03-19T11:33:32.087336906Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 
11:33:32.087985 containerd[1461]: time="2025-03-19T11:33:32.087960488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:2,}" Mar 19 11:33:32.177047 containerd[1461]: time="2025-03-19T11:33:32.176977616Z" level=error msg="Failed to destroy network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.178145 containerd[1461]: time="2025-03-19T11:33:32.178102488Z" level=error msg="encountered an error cleaning up failed sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.178247 containerd[1461]: time="2025-03-19T11:33:32.178174855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.179076 kubelet[2648]: E0319 11:33:32.178669 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.179076 kubelet[2648]: E0319 11:33:32.178758 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:32.179076 kubelet[2648]: E0319 11:33:32.178779 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:32.179620 kubelet[2648]: E0319 11:33:32.178828 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zlsx7" podUID="547e076a-acd7-4d2f-97da-f3027e556484" Mar 19 11:33:32.198229 containerd[1461]: time="2025-03-19T11:33:32.198150200Z" level=error msg="Failed to destroy network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.198762 containerd[1461]: time="2025-03-19T11:33:32.198723977Z" level=error msg="Failed to destroy network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.201880 containerd[1461]: time="2025-03-19T11:33:32.201835567Z" level=error msg="encountered an error cleaning up failed sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.202076 containerd[1461]: time="2025-03-19T11:33:32.202052388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.202251 containerd[1461]: time="2025-03-19T11:33:32.201852648Z" level=error msg="encountered an error cleaning up failed sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.202317 containerd[1461]: time="2025-03-19T11:33:32.202279491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.202571 kubelet[2648]: E0319 11:33:32.202533 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 19 11:33:32.202632 kubelet[2648]: E0319 11:33:32.202596 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:32.202632 kubelet[2648]: E0319 11:33:32.202618 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:32.202686 kubelet[2648]: E0319 11:33:32.202653 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7zd7r" podUID="07fd37a8-ef23-49a6-a372-10e4ce8f9811" Mar 19 11:33:32.202778 kubelet[2648]: E0319 11:33:32.202538 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.202830 kubelet[2648]: E0319 11:33:32.202800 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:32.202860 kubelet[2648]: E0319 11:33:32.202828 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:32.202924 kubelet[2648]: E0319 11:33:32.202852 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" podUID="9811c23a-6a7b-41b7-8fa0-52983b899281" Mar 19 11:33:32.221064 containerd[1461]: time="2025-03-19T11:33:32.221011753Z" level=error msg="Failed to destroy network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.221586 containerd[1461]: time="2025-03-19T11:33:32.221547086Z" level=error msg="encountered an error cleaning up failed sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.221659 containerd[1461]: time="2025-03-19T11:33:32.221613412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.221908 kubelet[2648]: E0319 11:33:32.221870 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.221993 kubelet[2648]: E0319 11:33:32.221925 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:32.221993 kubelet[2648]: E0319 11:33:32.221944 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:32.221993 kubelet[2648]: E0319 11:33:32.221980 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" podUID="498e0970-d5ce-4bd8-8d9d-336f0a003145" Mar 19 11:33:32.223987 containerd[1461]: time="2025-03-19T11:33:32.223941444Z" level=error msg="Failed to destroy network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.224636 containerd[1461]: time="2025-03-19T11:33:32.224608910Z" level=error msg="encountered an error cleaning up failed sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.225647 containerd[1461]: time="2025-03-19T11:33:32.225604889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.225967 kubelet[2648]: E0319 11:33:32.225935 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.226043 kubelet[2648]: E0319 11:33:32.225988 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:32.226043 kubelet[2648]: E0319 11:33:32.226010 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:32.226095 kubelet[2648]: E0319 11:33:32.226052 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" podUID="0af97ae0-2493-4bbf-a605-0511940d25f4" Mar 19 11:33:32.234506 containerd[1461]: time="2025-03-19T11:33:32.234455969Z" level=error msg="Failed to destroy network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.234858 containerd[1461]: time="2025-03-19T11:33:32.234826446Z" level=error msg="encountered an error cleaning up failed sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.234906 containerd[1461]: time="2025-03-19T11:33:32.234890692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.235209 kubelet[2648]: E0319 11:33:32.235159 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:32.235259 kubelet[2648]: E0319 11:33:32.235222 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:32.235259 kubelet[2648]: E0319 11:33:32.235245 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:32.235316 kubelet[2648]: E0319 11:33:32.235286 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:32.428359 systemd[1]: run-netns-cni\x2d4db8c280\x2d5746\x2d283e\x2df616\x2d54312b5b50a8.mount: Deactivated successfully. Mar 19 11:33:32.428449 systemd[1]: run-netns-cni\x2db30307b4\x2d4c47\x2da6d4\x2dce84\x2dc09da2d95118.mount: Deactivated successfully. Mar 19 11:33:32.674882 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:43742.service - OpenSSH per-connection server daemon (10.0.0.1:43742). Mar 19 11:33:32.782937 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 43742 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:32.784294 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:32.788483 systemd-logind[1447]: New session 9 of user core. Mar 19 11:33:32.810938 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:33:32.932652 sshd[4086]: Connection closed by 10.0.0.1 port 43742 Mar 19 11:33:32.933547 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:32.937649 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:43742.service: Deactivated successfully. Mar 19 11:33:32.939618 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:33:32.942510 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:33:32.943618 systemd-logind[1447]: Removed session 9. 
Mar 19 11:33:33.089189 kubelet[2648]: I0319 11:33:33.089153 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a" Mar 19 11:33:33.090136 containerd[1461]: time="2025-03-19T11:33:33.089916524Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" Mar 19 11:33:33.090592 containerd[1461]: time="2025-03-19T11:33:33.090549985Z" level=info msg="Ensure that sandbox 0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a in task-service has been cleanup successfully" Mar 19 11:33:33.091253 containerd[1461]: time="2025-03-19T11:33:33.090752765Z" level=info msg="TearDown network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" successfully" Mar 19 11:33:33.091253 containerd[1461]: time="2025-03-19T11:33:33.090771366Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" returns successfully" Mar 19 11:33:33.091811 containerd[1461]: time="2025-03-19T11:33:33.091587685Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:33.091811 containerd[1461]: time="2025-03-19T11:33:33.091663012Z" level=info msg="TearDown network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" successfully" Mar 19 11:33:33.091811 containerd[1461]: time="2025-03-19T11:33:33.091673613Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" returns successfully" Mar 19 11:33:33.092480 containerd[1461]: time="2025-03-19T11:33:33.092077252Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:33.092480 containerd[1461]: time="2025-03-19T11:33:33.092154980Z" level=info msg="TearDown network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:33.092480 containerd[1461]: time="2025-03-19T11:33:33.092164341Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:33.092583 kubelet[2648]: E0319 11:33:33.092340 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:33.093049 containerd[1461]: time="2025-03-19T11:33:33.092775360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:3,}" Mar 19 11:33:33.092951 systemd[1]: run-netns-cni\x2dae8a8587\x2d5a28\x2d3cb8\x2dc21f\x2d6edd29c31f17.mount: Deactivated successfully. 
Mar 19 11:33:33.093487 kubelet[2648]: I0319 11:33:33.093454 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009" Mar 19 11:33:33.095184 containerd[1461]: time="2025-03-19T11:33:33.095152029Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" Mar 19 11:33:33.095345 containerd[1461]: time="2025-03-19T11:33:33.095320765Z" level=info msg="Ensure that sandbox 8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009 in task-service has been cleanup successfully" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.095627154Z" level=info msg="TearDown network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" successfully" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.095647956Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" returns successfully" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.096036474Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.096109201Z" level=info msg="TearDown network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" successfully" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.096119042Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" returns successfully" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.096481437Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:33.096570 containerd[1461]: time="2025-03-19T11:33:33.096570765Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:33.096808 containerd[1461]: time="2025-03-19T11:33:33.096582126Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:33.098143 kubelet[2648]: I0319 11:33:33.096945 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984" Mar 19 11:33:33.097479 systemd[1]: run-netns-cni\x2dd53c6eae\x2d1e7c\x2dfee6\x2d370f\x2d0096c2674b59.mount: Deactivated successfully. 
Mar 19 11:33:33.098303 containerd[1461]: time="2025-03-19T11:33:33.097046291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:3,}" Mar 19 11:33:33.098438 containerd[1461]: time="2025-03-19T11:33:33.097337399Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" Mar 19 11:33:33.098775 kubelet[2648]: I0319 11:33:33.098746 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59" Mar 19 11:33:33.098931 containerd[1461]: time="2025-03-19T11:33:33.098735654Z" level=info msg="Ensure that sandbox 6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984 in task-service has been cleanup successfully" Mar 19 11:33:33.099179 containerd[1461]: time="2025-03-19T11:33:33.099142773Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" Mar 19 11:33:33.099512 containerd[1461]: time="2025-03-19T11:33:33.099308229Z" level=info msg="Ensure that sandbox 8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59 in task-service has been cleanup successfully" Mar 19 11:33:33.101212 systemd[1]: run-netns-cni\x2d67c90c17\x2dab49\x2dc57c\x2deb16\x2d17980223aa31.mount: Deactivated successfully. Mar 19 11:33:33.101321 systemd[1]: run-netns-cni\x2d4d327eba\x2d4001\x2d25a0\x2df3f8\x2dff7a3518a265.mount: Deactivated successfully. Mar 19 11:33:33.101654 containerd[1461]: time="2025-03-19T11:33:33.101552925Z" level=info msg="TearDown network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" successfully" Mar 19 11:33:33.101654 containerd[1461]: time="2025-03-19T11:33:33.101581168Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" returns successfully" Mar 19 11:33:33.102298 containerd[1461]: time="2025-03-19T11:33:33.102243312Z" level=info msg="TearDown network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" successfully" Mar 19 11:33:33.102298 containerd[1461]: time="2025-03-19T11:33:33.102265394Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" returns successfully" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102354483Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102426010Z" level=info msg="TearDown network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" successfully" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102434930Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" returns successfully" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102690075Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102750761Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102780484Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" 
Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102791005Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102817367Z" level=info msg="TearDown network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" successfully" Mar 19 11:33:33.103026 containerd[1461]: time="2025-03-19T11:33:33.102827488Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" returns successfully" Mar 19 11:33:33.103333 containerd[1461]: time="2025-03-19T11:33:33.103173602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:3,}" Mar 19 11:33:33.103990 containerd[1461]: time="2025-03-19T11:33:33.103891631Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:33.104049 containerd[1461]: time="2025-03-19T11:33:33.104026364Z" level=info msg="TearDown network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:33.104049 containerd[1461]: time="2025-03-19T11:33:33.104039645Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:33.104479 kubelet[2648]: E0319 11:33:33.104357 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:33.104690 containerd[1461]: time="2025-03-19T11:33:33.104598779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:3,}" Mar 19 11:33:33.149525 kubelet[2648]: I0319 11:33:33.147741 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de" Mar 19 11:33:33.149620 containerd[1461]: time="2025-03-19T11:33:33.148726071Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" Mar 19 11:33:33.149620 containerd[1461]: time="2025-03-19T11:33:33.149007298Z" level=info msg="Ensure that sandbox 6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de in task-service has been cleanup successfully" Mar 19 11:33:33.150805 containerd[1461]: time="2025-03-19T11:33:33.150071041Z" level=info msg="TearDown network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" successfully" Mar 19 11:33:33.150805 containerd[1461]: time="2025-03-19T11:33:33.150097883Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" returns successfully" Mar 19 11:33:33.150805 containerd[1461]: time="2025-03-19T11:33:33.150436996Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:33.150805 containerd[1461]: time="2025-03-19T11:33:33.150508083Z" level=info msg="TearDown network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" successfully" Mar 19 11:33:33.150805 containerd[1461]: time="2025-03-19T11:33:33.150518124Z" level=info msg="StopPodSandbox for 
\"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" returns successfully" Mar 19 11:33:33.151569 containerd[1461]: time="2025-03-19T11:33:33.151540982Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:33.151763 containerd[1461]: time="2025-03-19T11:33:33.151721120Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:33.151763 containerd[1461]: time="2025-03-19T11:33:33.151740002Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:33.152443 containerd[1461]: time="2025-03-19T11:33:33.152356901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:3,}" Mar 19 11:33:33.153017 kubelet[2648]: I0319 11:33:33.152991 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b" Mar 19 11:33:33.153646 containerd[1461]: time="2025-03-19T11:33:33.153583859Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" Mar 19 11:33:33.153844 containerd[1461]: time="2025-03-19T11:33:33.153815082Z" level=info msg="Ensure that sandbox 48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b in task-service has been cleanup successfully" Mar 19 11:33:33.154050 containerd[1461]: time="2025-03-19T11:33:33.154030502Z" level=info msg="TearDown network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" successfully" Mar 19 11:33:33.154050 containerd[1461]: time="2025-03-19T11:33:33.154048504Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" returns successfully" Mar 19 11:33:33.155350 containerd[1461]: time="2025-03-19T11:33:33.155170292Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:33.155350 containerd[1461]: time="2025-03-19T11:33:33.155268862Z" level=info msg="TearDown network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" successfully" Mar 19 11:33:33.155350 containerd[1461]: time="2025-03-19T11:33:33.155280023Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" returns successfully" Mar 19 11:33:33.155667 containerd[1461]: time="2025-03-19T11:33:33.155649098Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:33.155931 containerd[1461]: time="2025-03-19T11:33:33.155871480Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:33.155931 containerd[1461]: time="2025-03-19T11:33:33.155889481Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:33.156888 containerd[1461]: time="2025-03-19T11:33:33.156570627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:3,}" Mar 19 11:33:33.265995 containerd[1461]: time="2025-03-19T11:33:33.265938526Z" level=error msg="Failed to destroy network for sandbox 
\"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.266347 containerd[1461]: time="2025-03-19T11:33:33.266311122Z" level=error msg="encountered an error cleaning up failed sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.266399 containerd[1461]: time="2025-03-19T11:33:33.266372008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.266656 kubelet[2648]: E0319 11:33:33.266587 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.266768 kubelet[2648]: E0319 11:33:33.266667 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:33.266768 kubelet[2648]: E0319 11:33:33.266757 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:33.266824 kubelet[2648]: E0319 11:33:33.266803 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zlsx7" podUID="547e076a-acd7-4d2f-97da-f3027e556484" Mar 19 11:33:33.283070 containerd[1461]: 
time="2025-03-19T11:33:33.283017612Z" level=error msg="Failed to destroy network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.283386 containerd[1461]: time="2025-03-19T11:33:33.283361005Z" level=error msg="encountered an error cleaning up failed sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.283451 containerd[1461]: time="2025-03-19T11:33:33.283428732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.283660 kubelet[2648]: E0319 11:33:33.283626 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.283721 kubelet[2648]: E0319 11:33:33.283683 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:33.283721 kubelet[2648]: E0319 11:33:33.283714 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:33.283794 kubelet[2648]: E0319 11:33:33.283758 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7zd7r" 
podUID="07fd37a8-ef23-49a6-a372-10e4ce8f9811" Mar 19 11:33:33.287379 containerd[1461]: time="2025-03-19T11:33:33.287344189Z" level=error msg="Failed to destroy network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.288269 containerd[1461]: time="2025-03-19T11:33:33.288144946Z" level=error msg="encountered an error cleaning up failed sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.288269 containerd[1461]: time="2025-03-19T11:33:33.288219193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.288539 kubelet[2648]: E0319 11:33:33.288505 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.288595 kubelet[2648]: E0319 11:33:33.288554 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:33.288595 kubelet[2648]: E0319 11:33:33.288573 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:33.288727 kubelet[2648]: E0319 11:33:33.288631 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" podUID="0af97ae0-2493-4bbf-a605-0511940d25f4" Mar 19 11:33:33.300314 containerd[1461]: time="2025-03-19T11:33:33.300260153Z" level=error msg="Failed to destroy network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.303022 containerd[1461]: time="2025-03-19T11:33:33.302967694Z" level=error msg="encountered an error cleaning up failed sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.303098 containerd[1461]: time="2025-03-19T11:33:33.303048862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.303679 kubelet[2648]: E0319 11:33:33.303271 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.303679 kubelet[2648]: E0319 11:33:33.303330 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:33.303679 kubelet[2648]: E0319 11:33:33.303353 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:33.303846 kubelet[2648]: E0319 11:33:33.303396 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" podUID="9811c23a-6a7b-41b7-8fa0-52983b899281" Mar 19 11:33:33.306619 containerd[1461]: time="2025-03-19T11:33:33.306585843Z" level=error msg="Failed to destroy network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.307017 containerd[1461]: time="2025-03-19T11:33:33.306989242Z" level=error msg="encountered an error cleaning up failed sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.307144 containerd[1461]: time="2025-03-19T11:33:33.307122375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.307403 kubelet[2648]: E0319 11:33:33.307376 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.307723 kubelet[2648]: E0319 11:33:33.307580 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:33.307723 kubelet[2648]: E0319 11:33:33.307617 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:33.307723 kubelet[2648]: E0319 11:33:33.307662 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" podUID="498e0970-d5ce-4bd8-8d9d-336f0a003145" Mar 19 11:33:33.314626 containerd[1461]: time="2025-03-19T11:33:33.314505686Z" level=error msg="Failed to destroy network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.315133 containerd[1461]: time="2025-03-19T11:33:33.315084062Z" level=error msg="encountered an error cleaning up failed sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.315193 containerd[1461]: time="2025-03-19T11:33:33.315147108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.315334 kubelet[2648]: E0319 11:33:33.315300 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:33.315391 kubelet[2648]: E0319 11:33:33.315350 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:33.315391 kubelet[2648]: E0319 11:33:33.315369 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:33.315460 kubelet[2648]: E0319 11:33:33.315418 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:33.428914 systemd[1]: run-netns-cni\x2d38c22115\x2d9848\x2d4528\x2ddeba\x2dd8e40862c6f8.mount: Deactivated successfully. Mar 19 11:33:33.428999 systemd[1]: run-netns-cni\x2da69d9fcd\x2d83d7\x2d384d\x2db516\x2d85d748641157.mount: Deactivated successfully. Mar 19 11:33:34.157442 kubelet[2648]: I0319 11:33:34.157403 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f" Mar 19 11:33:34.159528 containerd[1461]: time="2025-03-19T11:33:34.158174781Z" level=info msg="StopPodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\"" Mar 19 11:33:34.159528 containerd[1461]: time="2025-03-19T11:33:34.158339276Z" level=info msg="Ensure that sandbox ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f in task-service has been cleanup successfully" Mar 19 11:33:34.159528 containerd[1461]: time="2025-03-19T11:33:34.158735953Z" level=info msg="TearDown network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" successfully" Mar 19 11:33:34.159528 containerd[1461]: time="2025-03-19T11:33:34.158767036Z" level=info msg="StopPodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" returns successfully" Mar 19 11:33:34.161285 containerd[1461]: time="2025-03-19T11:33:34.159890541Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" Mar 19 11:33:34.161285 containerd[1461]: time="2025-03-19T11:33:34.159965828Z" level=info msg="TearDown network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" successfully" Mar 19 11:33:34.161285 containerd[1461]: time="2025-03-19T11:33:34.159975509Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" returns successfully" Mar 19 11:33:34.160554 systemd[1]: run-netns-cni\x2df3201790\x2d61eb\x2d1d7a\x2dee27\x2dd15b63d0510c.mount: Deactivated successfully. Mar 19 11:33:34.167673 kubelet[2648]: I0319 11:33:34.163051 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62" Mar 19 11:33:34.167673 kubelet[2648]: E0319 11:33:34.167103 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:34.166105 systemd[1]: run-netns-cni\x2d060d1adc\x2d3dcc\x2d7baa\x2dc491\x2d55c696257e15.mount: Deactivated successfully. 
Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.163836630Z" level=info msg="StopPodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\"" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.163994485Z" level=info msg="Ensure that sandbox 6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62 in task-service has been cleanup successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.164324116Z" level=info msg="TearDown network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.164337837Z" level=info msg="StopPodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" returns successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.164908491Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.164997539Z" level=info msg="TearDown network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165009140Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" returns successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165073826Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165124271Z" level=info msg="TearDown network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165132312Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" returns successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165204878Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165287846Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165297527Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165490825Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165663201Z" level=info msg="TearDown network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165673402Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" returns successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165842058Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165896743Z" level=info msg="TearDown network for sandbox 
\"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.165930746Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.167545417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:4,}" Mar 19 11:33:34.168847 containerd[1461]: time="2025-03-19T11:33:34.168278566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:4,}" Mar 19 11:33:34.171159 kubelet[2648]: I0319 11:33:34.171018 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411" Mar 19 11:33:34.171907 containerd[1461]: time="2025-03-19T11:33:34.171690525Z" level=info msg="StopPodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\"" Mar 19 11:33:34.172176 containerd[1461]: time="2025-03-19T11:33:34.172153248Z" level=info msg="Ensure that sandbox 87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411 in task-service has been cleanup successfully" Mar 19 11:33:34.173444 containerd[1461]: time="2025-03-19T11:33:34.172510442Z" level=info msg="TearDown network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" successfully" Mar 19 11:33:34.173444 containerd[1461]: time="2025-03-19T11:33:34.172536804Z" level=info msg="StopPodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" returns successfully" Mar 19 11:33:34.173444 containerd[1461]: time="2025-03-19T11:33:34.172968084Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" Mar 19 11:33:34.173444 containerd[1461]: time="2025-03-19T11:33:34.173050732Z" level=info msg="TearDown network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" successfully" Mar 19 11:33:34.173444 containerd[1461]: time="2025-03-19T11:33:34.173060853Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" returns successfully" Mar 19 11:33:34.173626 containerd[1461]: time="2025-03-19T11:33:34.173592983Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:34.173734 containerd[1461]: time="2025-03-19T11:33:34.173673870Z" level=info msg="TearDown network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" successfully" Mar 19 11:33:34.173734 containerd[1461]: time="2025-03-19T11:33:34.173690432Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" returns successfully" Mar 19 11:33:34.175776 containerd[1461]: time="2025-03-19T11:33:34.174974232Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:34.175776 containerd[1461]: time="2025-03-19T11:33:34.175073681Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:34.175776 containerd[1461]: time="2025-03-19T11:33:34.175084002Z" level=info msg="StopPodSandbox for 
\"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:34.175193 systemd[1]: run-netns-cni\x2d718e4f67\x2d4069\x2dddb6\x2d60fe\x2d7b57021baf92.mount: Deactivated successfully. Mar 19 11:33:34.177303 containerd[1461]: time="2025-03-19T11:33:34.176834126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:4,}" Mar 19 11:33:34.178102 kubelet[2648]: I0319 11:33:34.178076 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080" Mar 19 11:33:34.178739 containerd[1461]: time="2025-03-19T11:33:34.178687579Z" level=info msg="StopPodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\"" Mar 19 11:33:34.178976 containerd[1461]: time="2025-03-19T11:33:34.178953284Z" level=info msg="Ensure that sandbox 255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080 in task-service has been cleanup successfully" Mar 19 11:33:34.180702 containerd[1461]: time="2025-03-19T11:33:34.179181505Z" level=info msg="TearDown network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" successfully" Mar 19 11:33:34.180890 containerd[1461]: time="2025-03-19T11:33:34.180863863Z" level=info msg="StopPodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" returns successfully" Mar 19 11:33:34.181842 containerd[1461]: time="2025-03-19T11:33:34.181299784Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" Mar 19 11:33:34.181842 containerd[1461]: time="2025-03-19T11:33:34.181385752Z" level=info msg="TearDown network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" successfully" Mar 19 11:33:34.181842 containerd[1461]: time="2025-03-19T11:33:34.181400233Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" returns successfully" Mar 19 11:33:34.181429 systemd[1]: run-netns-cni\x2dbac4caee\x2d973a\x2d43e8\x2d5f9d\x2dd49e2340fac4.mount: Deactivated successfully. 
Mar 19 11:33:34.182478 kubelet[2648]: I0319 11:33:34.182431 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4" Mar 19 11:33:34.182727 containerd[1461]: time="2025-03-19T11:33:34.182677032Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:34.182926 containerd[1461]: time="2025-03-19T11:33:34.182899453Z" level=info msg="StopPodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\"" Mar 19 11:33:34.183221 containerd[1461]: time="2025-03-19T11:33:34.183011224Z" level=info msg="TearDown network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" successfully" Mar 19 11:33:34.183221 containerd[1461]: time="2025-03-19T11:33:34.183052027Z" level=info msg="Ensure that sandbox e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4 in task-service has been cleanup successfully" Mar 19 11:33:34.183299 containerd[1461]: time="2025-03-19T11:33:34.183227964Z" level=info msg="TearDown network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" successfully" Mar 19 11:33:34.183299 containerd[1461]: time="2025-03-19T11:33:34.183241845Z" level=info msg="StopPodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" returns successfully" Mar 19 11:33:34.183481 containerd[1461]: time="2025-03-19T11:33:34.183400060Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" returns successfully" Mar 19 11:33:34.183592 containerd[1461]: time="2025-03-19T11:33:34.183564595Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" Mar 19 11:33:34.184512 containerd[1461]: time="2025-03-19T11:33:34.183648563Z" level=info msg="TearDown network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" successfully" Mar 19 11:33:34.184512 containerd[1461]: time="2025-03-19T11:33:34.184501563Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" returns successfully" Mar 19 11:33:34.184603 containerd[1461]: time="2025-03-19T11:33:34.184083724Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:34.184640 containerd[1461]: time="2025-03-19T11:33:34.184625935Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:34.184640 containerd[1461]: time="2025-03-19T11:33:34.184634816Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:34.185581 containerd[1461]: time="2025-03-19T11:33:34.185157784Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:34.185581 containerd[1461]: time="2025-03-19T11:33:34.185236512Z" level=info msg="TearDown network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" successfully" Mar 19 11:33:34.185581 containerd[1461]: time="2025-03-19T11:33:34.185246273Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" returns successfully" Mar 19 11:33:34.185581 containerd[1461]: time="2025-03-19T11:33:34.185320000Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:4,}" Mar 19 11:33:34.186153 containerd[1461]: time="2025-03-19T11:33:34.186118834Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:34.186689 containerd[1461]: time="2025-03-19T11:33:34.186213283Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" Mar 19 11:33:34.186689 containerd[1461]: time="2025-03-19T11:33:34.186431024Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 11:33:34.187609 kubelet[2648]: I0319 11:33:34.186286 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5" Mar 19 11:33:34.187672 containerd[1461]: time="2025-03-19T11:33:34.186864984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:4,}" Mar 19 11:33:34.187672 containerd[1461]: time="2025-03-19T11:33:34.186911668Z" level=info msg="StopPodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\"" Mar 19 11:33:34.187672 containerd[1461]: time="2025-03-19T11:33:34.187052042Z" level=info msg="Ensure that sandbox 43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5 in task-service has been cleanup successfully" Mar 19 11:33:34.187672 containerd[1461]: time="2025-03-19T11:33:34.187284183Z" level=info msg="TearDown network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" successfully" Mar 19 11:33:34.187672 containerd[1461]: time="2025-03-19T11:33:34.187299585Z" level=info msg="StopPodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" returns successfully" Mar 19 11:33:34.188149 containerd[1461]: time="2025-03-19T11:33:34.188122142Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" Mar 19 11:33:34.188329 containerd[1461]: time="2025-03-19T11:33:34.188312319Z" level=info msg="TearDown network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" successfully" Mar 19 11:33:34.188393 containerd[1461]: time="2025-03-19T11:33:34.188371565Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" returns successfully" Mar 19 11:33:34.188862 containerd[1461]: time="2025-03-19T11:33:34.188802085Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:34.189012 containerd[1461]: time="2025-03-19T11:33:34.188949139Z" level=info msg="TearDown network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" successfully" Mar 19 11:33:34.189012 containerd[1461]: time="2025-03-19T11:33:34.189003744Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" returns successfully" Mar 19 11:33:34.189352 containerd[1461]: time="2025-03-19T11:33:34.189321574Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:34.189438 containerd[1461]: time="2025-03-19T11:33:34.189419383Z" level=info msg="TearDown network for sandbox 
\"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:34.189438 containerd[1461]: time="2025-03-19T11:33:34.189434624Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:34.189653 kubelet[2648]: E0319 11:33:34.189632 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:34.190039 containerd[1461]: time="2025-03-19T11:33:34.189999957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:4,}" Mar 19 11:33:34.427202 systemd[1]: run-netns-cni\x2d885e247f\x2db461\x2d15ee\x2d8ed7\x2d8db5c978bf33.mount: Deactivated successfully. Mar 19 11:33:34.427303 systemd[1]: run-netns-cni\x2da8ea6592\x2dae7c\x2df1d8\x2d9e6f\x2ddd3b24551cda.mount: Deactivated successfully. Mar 19 11:33:34.681048 containerd[1461]: time="2025-03-19T11:33:34.680838983Z" level=error msg="Failed to destroy network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.681344 containerd[1461]: time="2025-03-19T11:33:34.681230060Z" level=error msg="encountered an error cleaning up failed sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.681344 containerd[1461]: time="2025-03-19T11:33:34.681295146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.681654 kubelet[2648]: E0319 11:33:34.681567 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.681654 kubelet[2648]: E0319 11:33:34.681628 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:34.681654 kubelet[2648]: E0319 11:33:34.681647 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zqr5d" Mar 19 11:33:34.683299 kubelet[2648]: E0319 11:33:34.681684 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zqr5d_calico-system(bbfe9adc-4e2f-44ac-a3f4-b25842fbe645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zqr5d" podUID="bbfe9adc-4e2f-44ac-a3f4-b25842fbe645" Mar 19 11:33:34.688149 containerd[1461]: time="2025-03-19T11:33:34.688107663Z" level=error msg="Failed to destroy network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.689444 containerd[1461]: time="2025-03-19T11:33:34.689402424Z" level=error msg="encountered an error cleaning up failed sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.689850 containerd[1461]: time="2025-03-19T11:33:34.689823583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.690782 kubelet[2648]: E0319 11:33:34.690742 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.690866 kubelet[2648]: E0319 11:33:34.690809 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:34.690866 kubelet[2648]: E0319 11:33:34.690830 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7zd7r" Mar 19 11:33:34.690913 kubelet[2648]: E0319 11:33:34.690869 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7zd7r_kube-system(07fd37a8-ef23-49a6-a372-10e4ce8f9811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7zd7r" podUID="07fd37a8-ef23-49a6-a372-10e4ce8f9811" Mar 19 11:33:34.693834 containerd[1461]: time="2025-03-19T11:33:34.693800995Z" level=error msg="Failed to destroy network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.694264 containerd[1461]: time="2025-03-19T11:33:34.694225755Z" level=error msg="encountered an error cleaning up failed sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.694581 containerd[1461]: time="2025-03-19T11:33:34.694547345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.694880 kubelet[2648]: E0319 11:33:34.694740 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.694880 kubelet[2648]: E0319 11:33:34.694788 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:34.694880 kubelet[2648]: E0319 11:33:34.694806 2648 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" Mar 19 11:33:34.694981 kubelet[2648]: E0319 11:33:34.694837 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-lckq4_calico-apiserver(9811c23a-6a7b-41b7-8fa0-52983b899281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" podUID="9811c23a-6a7b-41b7-8fa0-52983b899281" Mar 19 11:33:34.695259 containerd[1461]: time="2025-03-19T11:33:34.695232209Z" level=error msg="Failed to destroy network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.695648 containerd[1461]: time="2025-03-19T11:33:34.695620365Z" level=error msg="encountered an error cleaning up failed sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.695791 containerd[1461]: time="2025-03-19T11:33:34.695768339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.696039 kubelet[2648]: E0319 11:33:34.696016 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.696329 kubelet[2648]: E0319 11:33:34.696229 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:34.696329 kubelet[2648]: E0319 11:33:34.696256 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zlsx7" Mar 19 11:33:34.696329 kubelet[2648]: E0319 11:33:34.696295 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zlsx7_kube-system(547e076a-acd7-4d2f-97da-f3027e556484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zlsx7" podUID="547e076a-acd7-4d2f-97da-f3027e556484" Mar 19 11:33:34.702441 containerd[1461]: time="2025-03-19T11:33:34.702396359Z" level=error msg="Failed to destroy network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.703345 containerd[1461]: time="2025-03-19T11:33:34.703289763Z" level=error msg="encountered an error cleaning up failed sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.703909 containerd[1461]: time="2025-03-19T11:33:34.703824733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.704050 kubelet[2648]: E0319 11:33:34.704021 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.704104 kubelet[2648]: E0319 11:33:34.704078 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:34.704104 kubelet[2648]: E0319 11:33:34.704096 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" Mar 19 11:33:34.704170 kubelet[2648]: E0319 11:33:34.704136 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689bbc887b-2vs8c_calico-system(0af97ae0-2493-4bbf-a605-0511940d25f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" podUID="0af97ae0-2493-4bbf-a605-0511940d25f4" Mar 19 11:33:34.715057 containerd[1461]: time="2025-03-19T11:33:34.715018540Z" level=error msg="Failed to destroy network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.716013 containerd[1461]: time="2025-03-19T11:33:34.715376653Z" level=error msg="encountered an error cleaning up failed sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.716013 containerd[1461]: time="2025-03-19T11:33:34.715621836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.716126 kubelet[2648]: E0319 11:33:34.715843 2648 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 19 11:33:34.716126 kubelet[2648]: E0319 11:33:34.715892 2648 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:34.716126 kubelet[2648]: E0319 11:33:34.715909 2648 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" Mar 19 11:33:34.716195 kubelet[2648]: E0319 11:33:34.715941 2648 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c7dddc8f-6t24c_calico-apiserver(498e0970-d5ce-4bd8-8d9d-336f0a003145)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" podUID="498e0970-d5ce-4bd8-8d9d-336f0a003145" Mar 19 11:33:34.874094 containerd[1461]: time="2025-03-19T11:33:34.874043052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:34.874780 containerd[1461]: time="2025-03-19T11:33:34.874724476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 19 11:33:34.875835 containerd[1461]: time="2025-03-19T11:33:34.875804577Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:34.878107 containerd[1461]: time="2025-03-19T11:33:34.878041586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:34.878773 containerd[1461]: time="2025-03-19T11:33:34.878750173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.820804985s" Mar 19 11:33:34.878942 containerd[1461]: time="2025-03-19T11:33:34.878835101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 19 11:33:34.889020 containerd[1461]: time="2025-03-19T11:33:34.888881120Z" level=info msg="CreateContainer within sandbox \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 
19 11:33:34.911173 containerd[1461]: time="2025-03-19T11:33:34.911130561Z" level=info msg="CreateContainer within sandbox \"db44cbbf426ca4d04d076c18e60480231ec6d38ce6aa207c77d069457cdc4fa8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b497c0e6f5269e6828dae930ad69dbd65284c8a5c5215d9c5c7637e17e77dbff\"" Mar 19 11:33:34.911953 containerd[1461]: time="2025-03-19T11:33:34.911894152Z" level=info msg="StartContainer for \"b497c0e6f5269e6828dae930ad69dbd65284c8a5c5215d9c5c7637e17e77dbff\"" Mar 19 11:33:34.959872 systemd[1]: Started cri-containerd-b497c0e6f5269e6828dae930ad69dbd65284c8a5c5215d9c5c7637e17e77dbff.scope - libcontainer container b497c0e6f5269e6828dae930ad69dbd65284c8a5c5215d9c5c7637e17e77dbff. Mar 19 11:33:34.984272 containerd[1461]: time="2025-03-19T11:33:34.984232638Z" level=info msg="StartContainer for \"b497c0e6f5269e6828dae930ad69dbd65284c8a5c5215d9c5c7637e17e77dbff\" returns successfully" Mar 19 11:33:35.160174 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 19 11:33:35.160400 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 19 11:33:35.194758 kubelet[2648]: E0319 11:33:35.194666 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:35.201194 kubelet[2648]: I0319 11:33:35.201155 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2" Mar 19 11:33:35.202363 containerd[1461]: time="2025-03-19T11:33:35.202241254Z" level=info msg="StopPodSandbox for \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\"" Mar 19 11:33:35.202640 containerd[1461]: time="2025-03-19T11:33:35.202408229Z" level=info msg="Ensure that sandbox 7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2 in task-service has been cleanup successfully" Mar 19 11:33:35.203498 containerd[1461]: time="2025-03-19T11:33:35.202803505Z" level=info msg="TearDown network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\" successfully" Mar 19 11:33:35.203498 containerd[1461]: time="2025-03-19T11:33:35.202827387Z" level=info msg="StopPodSandbox for \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\" returns successfully" Mar 19 11:33:35.203498 containerd[1461]: time="2025-03-19T11:33:35.203336073Z" level=info msg="StopPodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\"" Mar 19 11:33:35.203498 containerd[1461]: time="2025-03-19T11:33:35.203417441Z" level=info msg="TearDown network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" successfully" Mar 19 11:33:35.203498 containerd[1461]: time="2025-03-19T11:33:35.203428122Z" level=info msg="StopPodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" returns successfully" Mar 19 11:33:35.204715 containerd[1461]: time="2025-03-19T11:33:35.204055059Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" Mar 19 11:33:35.204888 containerd[1461]: time="2025-03-19T11:33:35.204774964Z" level=info msg="TearDown network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" successfully" Mar 19 11:33:35.204888 containerd[1461]: time="2025-03-19T11:33:35.204790646Z" level=info msg="StopPodSandbox for 
\"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" returns successfully" Mar 19 11:33:35.205489 containerd[1461]: time="2025-03-19T11:33:35.205352497Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:35.205489 containerd[1461]: time="2025-03-19T11:33:35.205448985Z" level=info msg="TearDown network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" successfully" Mar 19 11:33:35.205651 containerd[1461]: time="2025-03-19T11:33:35.205460346Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" returns successfully" Mar 19 11:33:35.206562 containerd[1461]: time="2025-03-19T11:33:35.206041319Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:35.206562 containerd[1461]: time="2025-03-19T11:33:35.206115446Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:35.206562 containerd[1461]: time="2025-03-19T11:33:35.206124727Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:35.208640 containerd[1461]: time="2025-03-19T11:33:35.208603312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:5,}" Mar 19 11:33:35.209724 kubelet[2648]: I0319 11:33:35.209679 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377" Mar 19 11:33:35.210358 containerd[1461]: time="2025-03-19T11:33:35.210216699Z" level=info msg="StopPodSandbox for \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\"" Mar 19 11:33:35.210541 containerd[1461]: time="2025-03-19T11:33:35.210510165Z" level=info msg="Ensure that sandbox 7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377 in task-service has been cleanup successfully" Mar 19 11:33:35.210812 containerd[1461]: time="2025-03-19T11:33:35.210738426Z" level=info msg="TearDown network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\" successfully" Mar 19 11:33:35.210812 containerd[1461]: time="2025-03-19T11:33:35.210762988Z" level=info msg="StopPodSandbox for \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\" returns successfully" Mar 19 11:33:35.211759 containerd[1461]: time="2025-03-19T11:33:35.211718355Z" level=info msg="StopPodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\"" Mar 19 11:33:35.211915 containerd[1461]: time="2025-03-19T11:33:35.211815724Z" level=info msg="TearDown network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" successfully" Mar 19 11:33:35.211915 containerd[1461]: time="2025-03-19T11:33:35.211826525Z" level=info msg="StopPodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" returns successfully" Mar 19 11:33:35.213070 containerd[1461]: time="2025-03-19T11:33:35.212683763Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" Mar 19 11:33:35.213070 containerd[1461]: time="2025-03-19T11:33:35.212896662Z" level=info msg="TearDown network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" successfully" Mar 19 
11:33:35.213070 containerd[1461]: time="2025-03-19T11:33:35.212912144Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" returns successfully" Mar 19 11:33:35.214932 containerd[1461]: time="2025-03-19T11:33:35.214446603Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:35.215015 containerd[1461]: time="2025-03-19T11:33:35.214940288Z" level=info msg="TearDown network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" successfully" Mar 19 11:33:35.215268 containerd[1461]: time="2025-03-19T11:33:35.215233394Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" returns successfully" Mar 19 11:33:35.216216 containerd[1461]: time="2025-03-19T11:33:35.215981142Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:35.216216 containerd[1461]: time="2025-03-19T11:33:35.216066230Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:35.216216 containerd[1461]: time="2025-03-19T11:33:35.216086232Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:35.219531 containerd[1461]: time="2025-03-19T11:33:35.219400573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:5,}" Mar 19 11:33:35.220246 kubelet[2648]: I0319 11:33:35.220182 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pb47d" podStartSLOduration=1.057515176 podStartE2EDuration="14.220162162s" podCreationTimestamp="2025-03-19 11:33:21 +0000 UTC" firstStartedPulling="2025-03-19 11:33:21.716920943 +0000 UTC m=+22.864171542" lastFinishedPulling="2025-03-19 11:33:34.879567969 +0000 UTC m=+36.026818528" observedRunningTime="2025-03-19 11:33:35.214665983 +0000 UTC m=+36.361916582" watchObservedRunningTime="2025-03-19 11:33:35.220162162 +0000 UTC m=+36.367412761" Mar 19 11:33:35.225585 kubelet[2648]: I0319 11:33:35.225551 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914" Mar 19 11:33:35.226268 containerd[1461]: time="2025-03-19T11:33:35.226230194Z" level=info msg="StopPodSandbox for \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\"" Mar 19 11:33:35.226422 containerd[1461]: time="2025-03-19T11:33:35.226400609Z" level=info msg="Ensure that sandbox e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914 in task-service has been cleanup successfully" Mar 19 11:33:35.229575 kubelet[2648]: I0319 11:33:35.229292 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda" Mar 19 11:33:35.229867 containerd[1461]: time="2025-03-19T11:33:35.229834681Z" level=info msg="StopPodSandbox for \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\"" Mar 19 11:33:35.230200 containerd[1461]: time="2025-03-19T11:33:35.230101785Z" level=info msg="Ensure that sandbox 5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda in task-service has been cleanup successfully" Mar 19 11:33:35.230603 containerd[1461]: 
time="2025-03-19T11:33:35.230475979Z" level=info msg="TearDown network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\" successfully" Mar 19 11:33:35.230603 containerd[1461]: time="2025-03-19T11:33:35.230602391Z" level=info msg="StopPodSandbox for \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\" returns successfully" Mar 19 11:33:35.231041 containerd[1461]: time="2025-03-19T11:33:35.230884497Z" level=info msg="StopPodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\"" Mar 19 11:33:35.231041 containerd[1461]: time="2025-03-19T11:33:35.230962144Z" level=info msg="TearDown network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" successfully" Mar 19 11:33:35.231041 containerd[1461]: time="2025-03-19T11:33:35.230973265Z" level=info msg="StopPodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" returns successfully" Mar 19 11:33:35.231549 containerd[1461]: time="2025-03-19T11:33:35.231475590Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" Mar 19 11:33:35.231549 containerd[1461]: time="2025-03-19T11:33:35.231560198Z" level=info msg="TearDown network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" successfully" Mar 19 11:33:35.231763 containerd[1461]: time="2025-03-19T11:33:35.231571919Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" returns successfully" Mar 19 11:33:35.231844 containerd[1461]: time="2025-03-19T11:33:35.231818541Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:35.231966 containerd[1461]: time="2025-03-19T11:33:35.231945393Z" level=info msg="TearDown network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" successfully" Mar 19 11:33:35.231966 containerd[1461]: time="2025-03-19T11:33:35.231961354Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" returns successfully" Mar 19 11:33:35.232296 containerd[1461]: time="2025-03-19T11:33:35.232270663Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:35.232374 containerd[1461]: time="2025-03-19T11:33:35.232353910Z" level=info msg="TearDown network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:35.232374 containerd[1461]: time="2025-03-19T11:33:35.232368591Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:35.232983 kubelet[2648]: E0319 11:33:35.232913 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:35.233307 kubelet[2648]: I0319 11:33:35.232981 2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9" Mar 19 11:33:35.233557 containerd[1461]: time="2025-03-19T11:33:35.233521216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:5,}" Mar 19 11:33:35.239380 kubelet[2648]: I0319 11:33:35.239301 2648 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90" Mar 19 11:33:35.242891 containerd[1461]: time="2025-03-19T11:33:35.242578079Z" level=info msg="TearDown network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\" successfully" Mar 19 11:33:35.242891 containerd[1461]: time="2025-03-19T11:33:35.242610202Z" level=info msg="StopPodSandbox for \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.243441598Z" level=info msg="StopPodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.243533086Z" level=info msg="TearDown network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.243566889Z" level=info msg="StopPodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.243800710Z" level=info msg="StopPodSandbox for \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.243959845Z" level=info msg="Ensure that sandbox bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9 in task-service has been cleanup successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244204907Z" level=info msg="TearDown network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244222229Z" level=info msg="StopPodSandbox for \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244259872Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244336359Z" level=info msg="TearDown network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244352440Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244339959Z" level=info msg="StopPodSandbox for \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244455250Z" level=info msg="StopPodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244507654Z" level=info msg="Ensure that sandbox 538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90 in task-service has been cleanup successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244534857Z" level=info msg="TearDown network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244545418Z" level=info msg="StopPodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" returns successfully" Mar 19 
11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244651548Z" level=info msg="TearDown network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244664509Z" level=info msg="StopPodSandbox for \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244931733Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244986698Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245069386Z" level=info msg="TearDown network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245079626Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.244992379Z" level=info msg="TearDown network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245120710Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245081387Z" level=info msg="StopPodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245393095Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245488424Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245500625Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245553509Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245611435Z" level=info msg="TearDown network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245619275Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.245990469Z" level=info msg="TearDown network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.246005711Z" level=info msg="StopPodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.246029833Z" level=info msg="StopPodSandbox for 
\"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.246126362Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.246136682Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:35.246781 containerd[1461]: time="2025-03-19T11:33:35.246248613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:5,}" Mar 19 11:33:35.247658 containerd[1461]: time="2025-03-19T11:33:35.247482565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:5,}" Mar 19 11:33:35.247658 containerd[1461]: time="2025-03-19T11:33:35.247508407Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" Mar 19 11:33:35.247658 containerd[1461]: time="2025-03-19T11:33:35.247611937Z" level=info msg="TearDown network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" successfully" Mar 19 11:33:35.247658 containerd[1461]: time="2025-03-19T11:33:35.247624498Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" returns successfully" Mar 19 11:33:35.248607 containerd[1461]: time="2025-03-19T11:33:35.248427331Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:35.248607 containerd[1461]: time="2025-03-19T11:33:35.248527100Z" level=info msg="TearDown network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" successfully" Mar 19 11:33:35.248607 containerd[1461]: time="2025-03-19T11:33:35.248538021Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" returns successfully" Mar 19 11:33:35.250162 containerd[1461]: time="2025-03-19T11:33:35.250126365Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:35.252118 containerd[1461]: time="2025-03-19T11:33:35.250374748Z" level=info msg="TearDown network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:35.252118 containerd[1461]: time="2025-03-19T11:33:35.250390829Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:35.252118 containerd[1461]: time="2025-03-19T11:33:35.251650543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:5,}" Mar 19 11:33:35.252234 kubelet[2648]: E0319 11:33:35.251299 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:35.440104 systemd[1]: run-netns-cni\x2da00a4b66\x2d4af4\x2dcd13\x2debcd\x2d3d99380e0373.mount: Deactivated successfully. 
Mar 19 11:33:35.440890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2-shm.mount: Deactivated successfully. Mar 19 11:33:35.440946 systemd[1]: run-netns-cni\x2d95162aae\x2dc7cd\x2da3b3\x2d2820\x2d2e1613c7b058.mount: Deactivated successfully. Mar 19 11:33:35.440993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90-shm.mount: Deactivated successfully. Mar 19 11:33:35.441051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724140479.mount: Deactivated successfully. Mar 19 11:33:35.783255 systemd-networkd[1395]: cali952f0c63678: Link UP Mar 19 11:33:35.785339 systemd-networkd[1395]: cali952f0c63678: Gained carrier Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.353 [INFO][4595] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.440 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zqr5d-eth0 csi-node-driver- calico-system bbfe9adc-4e2f-44ac-a3f4-b25842fbe645 613 0 2025-03-19 11:33:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zqr5d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali952f0c63678 [] []}} ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.440 [INFO][4595] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.687 [INFO][4687] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" HandleID="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Workload="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.715 [INFO][4687] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" HandleID="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Workload="localhost-k8s-csi--node--driver--zqr5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zqr5d", "timestamp":"2025-03-19 11:33:35.687132713 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.716 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.716 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.716 [INFO][4687] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.718 [INFO][4687] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.733 [INFO][4687] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.737 [INFO][4687] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.739 [INFO][4687] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.741 [INFO][4687] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.741 [INFO][4687] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.743 [INFO][4687] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658 Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.746 [INFO][4687] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.761 [INFO][4687] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.761 [INFO][4687] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" host="localhost" Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.761 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 11:33:35.814735 containerd[1461]: 2025-03-19 11:33:35.761 [INFO][4687] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" HandleID="k8s-pod-network.e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Workload="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.815322 containerd[1461]: 2025-03-19 11:33:35.769 [INFO][4595] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zqr5d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645", ResourceVersion:"613", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zqr5d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali952f0c63678", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.815322 containerd[1461]: 2025-03-19 11:33:35.770 [INFO][4595] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.815322 containerd[1461]: 2025-03-19 11:33:35.770 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali952f0c63678 ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.815322 containerd[1461]: 2025-03-19 11:33:35.783 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.815322 containerd[1461]: 2025-03-19 11:33:35.783 [INFO][4595] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zqr5d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bbfe9adc-4e2f-44ac-a3f4-b25842fbe645", ResourceVersion:"613", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658", Pod:"csi-node-driver-zqr5d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali952f0c63678", MAC:"6a:20:68:c4:46:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.815322 containerd[1461]: 2025-03-19 11:33:35.808 [INFO][4595] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658" Namespace="calico-system" Pod="csi-node-driver-zqr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--zqr5d-eth0" Mar 19 11:33:35.846451 containerd[1461]: time="2025-03-19T11:33:35.846189446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:35.846451 containerd[1461]: time="2025-03-19T11:33:35.846257172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:35.846451 containerd[1461]: time="2025-03-19T11:33:35.846271653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.846633 containerd[1461]: time="2025-03-19T11:33:35.846443669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.863231 systemd-networkd[1395]: cali58d993c88e7: Link UP Mar 19 11:33:35.863868 systemd-networkd[1395]: cali58d993c88e7: Gained carrier Mar 19 11:33:35.866830 systemd[1]: Started cri-containerd-e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658.scope - libcontainer container e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658. 
Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.452 [INFO][4661] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.493 [INFO][4661] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0 calico-apiserver-77c7dddc8f- calico-apiserver 9811c23a-6a7b-41b7-8fa0-52983b899281 788 0 2025-03-19 11:33:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c7dddc8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c7dddc8f-lckq4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58d993c88e7 [] []}} ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.493 [INFO][4661] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.694 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" HandleID="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Workload="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.717 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" HandleID="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Workload="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400041b310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c7dddc8f-lckq4", "timestamp":"2025-03-19 11:33:35.694064463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.717 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.761 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.761 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.763 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.769 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.783 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.791 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.808 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.808 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.812 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.847 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.855 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.855 [INFO][4712] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" host="localhost" Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.855 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 19 11:33:35.878165 containerd[1461]: 2025-03-19 11:33:35.855 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" HandleID="k8s-pod-network.8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Workload="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.878841 containerd[1461]: 2025-03-19 11:33:35.860 [INFO][4661] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0", GenerateName:"calico-apiserver-77c7dddc8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9811c23a-6a7b-41b7-8fa0-52983b899281", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c7dddc8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c7dddc8f-lckq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58d993c88e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.878841 containerd[1461]: 2025-03-19 11:33:35.860 [INFO][4661] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.878841 containerd[1461]: 2025-03-19 11:33:35.860 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58d993c88e7 ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.878841 containerd[1461]: 2025-03-19 11:33:35.863 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.878841 containerd[1461]: 2025-03-19 11:33:35.863 [INFO][4661] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0", GenerateName:"calico-apiserver-77c7dddc8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9811c23a-6a7b-41b7-8fa0-52983b899281", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c7dddc8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b", Pod:"calico-apiserver-77c7dddc8f-lckq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58d993c88e7", MAC:"56:8c:54:f8:0c:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.878841 containerd[1461]: 2025-03-19 11:33:35.875 [INFO][4661] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-lckq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--lckq4-eth0" Mar 19 11:33:35.879324 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:33:35.891436 containerd[1461]: time="2025-03-19T11:33:35.891395193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zqr5d,Uid:bbfe9adc-4e2f-44ac-a3f4-b25842fbe645,Namespace:calico-system,Attempt:5,} returns sandbox id \"e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658\"" Mar 19 11:33:35.893770 containerd[1461]: time="2025-03-19T11:33:35.893657199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 19 11:33:35.900861 systemd-networkd[1395]: cali5ab640584ac: Link UP Mar 19 11:33:35.901685 systemd-networkd[1395]: cali5ab640584ac: Gained carrier Mar 19 11:33:35.909184 containerd[1461]: time="2025-03-19T11:33:35.908447863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:35.909184 containerd[1461]: time="2025-03-19T11:33:35.909037676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:35.909628 containerd[1461]: time="2025-03-19T11:33:35.909066839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.909628 containerd[1461]: time="2025-03-19T11:33:35.909185290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.418 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.466 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0 calico-apiserver-77c7dddc8f- calico-apiserver 498e0970-d5ce-4bd8-8d9d-336f0a003145 783 0 2025-03-19 11:33:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c7dddc8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c7dddc8f-6t24c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ab640584ac [] []}} ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.466 [INFO][4609] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.686 [INFO][4683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" HandleID="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Workload="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.717 [INFO][4683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" HandleID="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Workload="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000568e50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c7dddc8f-6t24c", "timestamp":"2025-03-19 11:33:35.686802443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.717 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.857 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.857 [INFO][4683] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.860 [INFO][4683] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.868 [INFO][4683] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.872 [INFO][4683] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.876 [INFO][4683] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.878 [INFO][4683] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.878 [INFO][4683] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.881 [INFO][4683] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.885 [INFO][4683] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.892 [INFO][4683] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.892 [INFO][4683] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" host="localhost" Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.892 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
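Worth noting in the interleaved entries: three CNI ADD requests ([4712], [4683] and [4721]) all logged "About to acquire host-wide IPAM lock" around 11:33:35.69, but each one proceeds only after the previous holder logs "Released", so assignments happen strictly one at a time and no two sandboxes can claim the same address. A toy Go illustration of that serialization; the pod names and the counter are invented for the sketch.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	// hostLock stands in for Calico's host-wide IPAM lock: only one CNI ADD
	// walks the block and claims an address at a time.
	hostLock sync.Mutex
	nextHost = 130
)

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	hostLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", nextHost)
	nextHost++
	fmt.Printf("assigned %s to %s\n", ip, pod)
}

func main() {
	var wg sync.WaitGroup
	// Which pod gets which address depends on scheduling; the lock only
	// guarantees the claims never overlap.
	for _, pod := range []string{"apiserver-lckq4", "apiserver-6t24c", "kube-controllers-2vs8c"} {
		wg.Add(1)
		go assign(pod, &wg)
	}
	wg.Wait()
}
```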
Mar 19 11:33:35.921071 containerd[1461]: 2025-03-19 11:33:35.892 [INFO][4683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" HandleID="k8s-pod-network.3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Workload="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.922107 containerd[1461]: 2025-03-19 11:33:35.895 [INFO][4609] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0", GenerateName:"calico-apiserver-77c7dddc8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"498e0970-d5ce-4bd8-8d9d-336f0a003145", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c7dddc8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c7dddc8f-6t24c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ab640584ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.922107 containerd[1461]: 2025-03-19 11:33:35.895 [INFO][4609] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.922107 containerd[1461]: 2025-03-19 11:33:35.896 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ab640584ac ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.922107 containerd[1461]: 2025-03-19 11:33:35.903 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.922107 containerd[1461]: 2025-03-19 11:33:35.903 [INFO][4609] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0", GenerateName:"calico-apiserver-77c7dddc8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"498e0970-d5ce-4bd8-8d9d-336f0a003145", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c7dddc8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f", Pod:"calico-apiserver-77c7dddc8f-6t24c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ab640584ac", MAC:"16:1b:32:de:e8:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.922107 containerd[1461]: 2025-03-19 11:33:35.915 [INFO][4609] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f" Namespace="calico-apiserver" Pod="calico-apiserver-77c7dddc8f-6t24c" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c7dddc8f--6t24c-eth0" Mar 19 11:33:35.930117 systemd[1]: Started cri-containerd-8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b.scope - libcontainer container 8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b. 
Mar 19 11:33:35.937249 systemd-networkd[1395]: cali703f1fcd7dd: Link UP Mar 19 11:33:35.937476 systemd-networkd[1395]: cali703f1fcd7dd: Gained carrier Mar 19 11:33:35.949709 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.451 [INFO][4628] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.487 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0 calico-kube-controllers-689bbc887b- calico-system 0af97ae0-2493-4bbf-a605-0511940d25f4 787 0 2025-03-19 11:33:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:689bbc887b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-689bbc887b-2vs8c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali703f1fcd7dd [] []}} ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.488 [INFO][4628] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.694 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" HandleID="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Workload="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.717 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" HandleID="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Workload="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000280590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-689bbc887b-2vs8c", "timestamp":"2025-03-19 11:33:35.694328447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.717 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.893 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.893 [INFO][4721] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.896 [INFO][4721] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.907 [INFO][4721] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.914 [INFO][4721] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.916 [INFO][4721] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.919 [INFO][4721] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.919 [INFO][4721] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.921 [INFO][4721] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.926 [INFO][4721] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.932 [INFO][4721] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.932 [INFO][4721] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" host="localhost" Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.932 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
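If you need to pull the results out of a capture like this, the "Calico CNI IPAM assigned addresses" entries carry the useful bits (assigned address and container ID) in one place. A minimal Go sketch that extracts them with a regular expression; the pattern is tailored to the exact phrasing in these lines and is not a general journal parser.

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches entries like:
//   ... Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3c29..."
var assigned = regexp.MustCompile(
	`Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\].*ContainerID="([0-9a-f]+)"`)

func main() {
	line := `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f"`
	if m := assigned.FindStringSubmatch(line); m != nil {
		fmt.Printf("container %s... got %s\n", m[2][:12], m[1])
	}
}
```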
Mar 19 11:33:35.951376 containerd[1461]: 2025-03-19 11:33:35.932 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" HandleID="k8s-pod-network.a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Workload="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.951917 containerd[1461]: 2025-03-19 11:33:35.935 [INFO][4628] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0", GenerateName:"calico-kube-controllers-689bbc887b-", Namespace:"calico-system", SelfLink:"", UID:"0af97ae0-2493-4bbf-a605-0511940d25f4", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"689bbc887b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-689bbc887b-2vs8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali703f1fcd7dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.951917 containerd[1461]: 2025-03-19 11:33:35.935 [INFO][4628] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.951917 containerd[1461]: 2025-03-19 11:33:35.935 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali703f1fcd7dd ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.951917 containerd[1461]: 2025-03-19 11:33:35.937 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.951917 containerd[1461]: 2025-03-19 11:33:35.938 [INFO][4628] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0", GenerateName:"calico-kube-controllers-689bbc887b-", Namespace:"calico-system", SelfLink:"", UID:"0af97ae0-2493-4bbf-a605-0511940d25f4", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"689bbc887b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc", Pod:"calico-kube-controllers-689bbc887b-2vs8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali703f1fcd7dd", MAC:"c2:d3:7a:3e:3e:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:35.951917 containerd[1461]: 2025-03-19 11:33:35.948 [INFO][4628] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc" Namespace="calico-system" Pod="calico-kube-controllers-689bbc887b-2vs8c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--689bbc887b--2vs8c-eth0" Mar 19 11:33:35.954411 containerd[1461]: time="2025-03-19T11:33:35.953873270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:35.954411 containerd[1461]: time="2025-03-19T11:33:35.954264946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:35.954411 containerd[1461]: time="2025-03-19T11:33:35.954282908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.954411 containerd[1461]: time="2025-03-19T11:33:35.954371236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.979856 systemd[1]: Started cri-containerd-3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f.scope - libcontainer container 3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f. Mar 19 11:33:35.981915 containerd[1461]: time="2025-03-19T11:33:35.981841212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:35.981915 containerd[1461]: time="2025-03-19T11:33:35.981889856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:35.981915 containerd[1461]: time="2025-03-19T11:33:35.981900417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.982119 containerd[1461]: time="2025-03-19T11:33:35.981966143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:35.982500 containerd[1461]: time="2025-03-19T11:33:35.982467788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-lckq4,Uid:9811c23a-6a7b-41b7-8fa0-52983b899281,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b\"" Mar 19 11:33:35.989888 systemd-networkd[1395]: cali029e707087a: Link UP Mar 19 11:33:35.990965 systemd-networkd[1395]: cali029e707087a: Gained carrier Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.425 [INFO][4617] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.467 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0 coredns-7db6d8ff4d- kube-system 547e076a-acd7-4d2f-97da-f3027e556484 786 0 2025-03-19 11:33:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-zlsx7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali029e707087a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.467 [INFO][4617] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.686 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" HandleID="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Workload="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.718 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" HandleID="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Workload="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001218b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-zlsx7", "timestamp":"2025-03-19 11:33:35.686798043 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.718 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.932 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.933 [INFO][4684] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.935 [INFO][4684] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.949 [INFO][4684] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.954 [INFO][4684] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.957 [INFO][4684] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.959 [INFO][4684] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.959 [INFO][4684] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.961 [INFO][4684] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.968 [INFO][4684] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.977 [INFO][4684] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.978 [INFO][4684] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" host="localhost" Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.978 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
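The assignArgs dumps above are dense but carry only a few fields that matter for reading the log: how many addresses are requested, and which namespace/node/pod the request is for. Below is a stripped-down Go mirror of the coredns-7db6d8ff4d-zlsx7 request, purely as a reading aid; the field names are copied from the dump, and this is not the real libcalico-go type.

```go
package main

import "fmt"

// autoAssignArgs keeps only the fields worth reading in the
// assignArgs=ipam.AutoAssignArgs{...} dumps above.
type autoAssignArgs struct {
	Num4, Num6 int               // addresses requested per family
	Attrs      map[string]string // namespace, node, pod, timestamp
	Hostname   string            // host whose block affinity is used
}

func main() {
	// Values taken from the request logged at 11:33:35.686 for coredns-7db6d8ff4d-zlsx7.
	req := autoAssignArgs{
		Num4:     1,
		Num6:     0,
		Attrs:    map[string]string{"namespace": "kube-system", "node": "localhost", "pod": "coredns-7db6d8ff4d-zlsx7"},
		Hostname: "localhost",
	}
	fmt.Printf("%s/%s wants %d IPv4 address(es) on %s\n",
		req.Attrs["namespace"], req.Attrs["pod"], req.Num4, req.Hostname)
}
```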
Mar 19 11:33:36.004438 containerd[1461]: 2025-03-19 11:33:35.978 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" HandleID="k8s-pod-network.b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Workload="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.005501 containerd[1461]: 2025-03-19 11:33:35.986 [INFO][4617] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"547e076a-acd7-4d2f-97da-f3027e556484", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-zlsx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali029e707087a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:36.005501 containerd[1461]: 2025-03-19 11:33:35.987 [INFO][4617] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.005501 containerd[1461]: 2025-03-19 11:33:35.987 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali029e707087a ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.005501 containerd[1461]: 2025-03-19 11:33:35.990 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.005501 containerd[1461]: 2025-03-19 11:33:35.990 
[INFO][4617] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"547e076a-acd7-4d2f-97da-f3027e556484", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c", Pod:"coredns-7db6d8ff4d-zlsx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali029e707087a", MAC:"9a:cd:51:a5:c4:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:36.005501 containerd[1461]: 2025-03-19 11:33:36.001 [INFO][4617] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zlsx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zlsx7-eth0" Mar 19 11:33:36.006607 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:33:36.015882 systemd[1]: Started cri-containerd-a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc.scope - libcontainer container a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc. Mar 19 11:33:36.028544 systemd-networkd[1395]: cali08968b049af: Link UP Mar 19 11:33:36.028838 systemd-networkd[1395]: cali08968b049af: Gained carrier Mar 19 11:33:36.034879 containerd[1461]: time="2025-03-19T11:33:36.033256645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:36.034879 containerd[1461]: time="2025-03-19T11:33:36.033321130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:36.034879 containerd[1461]: time="2025-03-19T11:33:36.033337372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:36.034879 containerd[1461]: time="2025-03-19T11:33:36.033413819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:36.043544 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:33:36.043714 containerd[1461]: time="2025-03-19T11:33:36.043651763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c7dddc8f-6t24c,Uid:498e0970-d5ce-4bd8-8d9d-336f0a003145,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f\"" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.449 [INFO][4627] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.471 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0 coredns-7db6d8ff4d- kube-system 07fd37a8-ef23-49a6-a372-10e4ce8f9811 778 0 2025-03-19 11:33:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-7zd7r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08968b049af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.471 [INFO][4627] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.686 [INFO][4691] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" HandleID="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Workload="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.718 [INFO][4691] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" HandleID="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Workload="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fdf40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-7zd7r", "timestamp":"2025-03-19 11:33:35.686809764 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.718 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.978 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.978 [INFO][4691] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.980 [INFO][4691] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.987 [INFO][4691] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.994 [INFO][4691] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:35.996 [INFO][4691] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.000 [INFO][4691] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.000 [INFO][4691] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.004 [INFO][4691] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648 Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.009 [INFO][4691] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.018 [INFO][4691] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.019 [INFO][4691] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" host="localhost" Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.019 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
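One readability note on the coredns endpoint dumps: the container ports are printed as Go hex literals, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). A trivial check:

```go
package main

import "fmt"

func main() {
	// Port values as printed in the coredns WorkloadEndpoint dumps above.
	fmt.Println(0x35, 0x23c1) // prints: 53 9153
}
```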
Mar 19 11:33:36.049621 containerd[1461]: 2025-03-19 11:33:36.019 [INFO][4691] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" HandleID="k8s-pod-network.28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Workload="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.051025 containerd[1461]: 2025-03-19 11:33:36.025 [INFO][4627] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"07fd37a8-ef23-49a6-a372-10e4ce8f9811", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-7zd7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08968b049af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:36.051025 containerd[1461]: 2025-03-19 11:33:36.025 [INFO][4627] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.051025 containerd[1461]: 2025-03-19 11:33:36.025 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08968b049af ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.051025 containerd[1461]: 2025-03-19 11:33:36.029 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.051025 containerd[1461]: 2025-03-19 11:33:36.029 
[INFO][4627] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"07fd37a8-ef23-49a6-a372-10e4ce8f9811", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.March, 19, 11, 33, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648", Pod:"coredns-7db6d8ff4d-7zd7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08968b049af", MAC:"32:f7:92:86:73:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 19 11:33:36.051025 containerd[1461]: 2025-03-19 11:33:36.042 [INFO][4627] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7zd7r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7zd7r-eth0" Mar 19 11:33:36.060887 systemd[1]: Started cri-containerd-b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c.scope - libcontainer container b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c. Mar 19 11:33:36.071727 containerd[1461]: time="2025-03-19T11:33:36.071201718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:33:36.071727 containerd[1461]: time="2025-03-19T11:33:36.071251002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:33:36.071727 containerd[1461]: time="2025-03-19T11:33:36.071266044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:36.071727 containerd[1461]: time="2025-03-19T11:33:36.071332849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:33:36.073654 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:33:36.082577 containerd[1461]: time="2025-03-19T11:33:36.082525719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689bbc887b-2vs8c,Uid:0af97ae0-2493-4bbf-a605-0511940d25f4,Namespace:calico-system,Attempt:5,} returns sandbox id \"a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc\"" Mar 19 11:33:36.091894 systemd[1]: Started cri-containerd-28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648.scope - libcontainer container 28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648. Mar 19 11:33:36.095242 containerd[1461]: time="2025-03-19T11:33:36.095208679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zlsx7,Uid:547e076a-acd7-4d2f-97da-f3027e556484,Namespace:kube-system,Attempt:5,} returns sandbox id \"b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c\"" Mar 19 11:33:36.096040 kubelet[2648]: E0319 11:33:36.096018 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:36.098777 containerd[1461]: time="2025-03-19T11:33:36.098745032Z" level=info msg="CreateContainer within sandbox \"b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:33:36.104220 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:33:36.118574 containerd[1461]: time="2025-03-19T11:33:36.118496897Z" level=info msg="CreateContainer within sandbox \"b52707b6440d8b6b2ab62ba80309795ed33aaaad113f32f04baa5d513290d66c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3499c966549745e63589520e7959f19618e0d536f1409ecf273b2f7d47032e25\"" Mar 19 11:33:36.119233 containerd[1461]: time="2025-03-19T11:33:36.119163636Z" level=info msg="StartContainer for \"3499c966549745e63589520e7959f19618e0d536f1409ecf273b2f7d47032e25\"" Mar 19 11:33:36.123793 containerd[1461]: time="2025-03-19T11:33:36.123762843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7zd7r,Uid:07fd37a8-ef23-49a6-a372-10e4ce8f9811,Namespace:kube-system,Attempt:5,} returns sandbox id \"28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648\"" Mar 19 11:33:36.124364 kubelet[2648]: E0319 11:33:36.124338 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:36.127261 containerd[1461]: time="2025-03-19T11:33:36.127217988Z" level=info msg="CreateContainer within sandbox \"28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:33:36.137146 containerd[1461]: time="2025-03-19T11:33:36.137090460Z" level=info msg="CreateContainer within sandbox \"28691d73095a53b47b4f64b21de6c8d1c85321f483f04c5699a8c7b18263f648\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"923aef27106e4fff2c140053f0aa306fb3982dba2ab17bb657f436873598ff47\"" Mar 19 11:33:36.138136 containerd[1461]: time="2025-03-19T11:33:36.137670192Z" level=info msg="StartContainer for 
\"923aef27106e4fff2c140053f0aa306fb3982dba2ab17bb657f436873598ff47\"" Mar 19 11:33:36.147885 systemd[1]: Started cri-containerd-3499c966549745e63589520e7959f19618e0d536f1409ecf273b2f7d47032e25.scope - libcontainer container 3499c966549745e63589520e7959f19618e0d536f1409ecf273b2f7d47032e25. Mar 19 11:33:36.161820 systemd[1]: Started cri-containerd-923aef27106e4fff2c140053f0aa306fb3982dba2ab17bb657f436873598ff47.scope - libcontainer container 923aef27106e4fff2c140053f0aa306fb3982dba2ab17bb657f436873598ff47. Mar 19 11:33:36.177776 containerd[1461]: time="2025-03-19T11:33:36.177692688Z" level=info msg="StartContainer for \"3499c966549745e63589520e7959f19618e0d536f1409ecf273b2f7d47032e25\" returns successfully" Mar 19 11:33:36.193102 containerd[1461]: time="2025-03-19T11:33:36.193007962Z" level=info msg="StartContainer for \"923aef27106e4fff2c140053f0aa306fb3982dba2ab17bb657f436873598ff47\" returns successfully" Mar 19 11:33:36.266017 kubelet[2648]: E0319 11:33:36.265812 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:36.280867 kubelet[2648]: I0319 11:33:36.280296 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zlsx7" podStartSLOduration=23.280279114 podStartE2EDuration="23.280279114s" podCreationTimestamp="2025-03-19 11:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:33:36.2787599 +0000 UTC m=+37.426010499" watchObservedRunningTime="2025-03-19 11:33:36.280279114 +0000 UTC m=+37.427529713" Mar 19 11:33:36.283285 kubelet[2648]: I0319 11:33:36.283036 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:36.283891 kubelet[2648]: E0319 11:33:36.283859 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:36.284114 kubelet[2648]: E0319 11:33:36.283993 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:36.295294 kubelet[2648]: I0319 11:33:36.295057 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7zd7r" podStartSLOduration=23.295041978 podStartE2EDuration="23.295041978s" podCreationTimestamp="2025-03-19 11:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:33:36.294775755 +0000 UTC m=+37.442026354" watchObservedRunningTime="2025-03-19 11:33:36.295041978 +0000 UTC m=+37.442292577" Mar 19 11:33:36.890406 containerd[1461]: time="2025-03-19T11:33:36.890250497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:36.891887 containerd[1461]: time="2025-03-19T11:33:36.891639139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 19 11:33:36.893195 containerd[1461]: time="2025-03-19T11:33:36.892906491Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Mar 19 11:33:36.895091 containerd[1461]: time="2025-03-19T11:33:36.895037480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:36.895781 containerd[1461]: time="2025-03-19T11:33:36.895744382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.00205482s" Mar 19 11:33:36.895781 containerd[1461]: time="2025-03-19T11:33:36.895778345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 19 11:33:36.897249 containerd[1461]: time="2025-03-19T11:33:36.897200551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 19 11:33:36.899304 containerd[1461]: time="2025-03-19T11:33:36.899047074Z" level=info msg="CreateContainer within sandbox \"e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 19 11:33:36.914238 containerd[1461]: time="2025-03-19T11:33:36.914200813Z" level=info msg="CreateContainer within sandbox \"e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"af74a92a534b05be4e75c846e9f30e699faea9c2aca2e3be1e08a185c702b0da\"" Mar 19 11:33:36.914877 containerd[1461]: time="2025-03-19T11:33:36.914852791Z" level=info msg="StartContainer for \"af74a92a534b05be4e75c846e9f30e699faea9c2aca2e3be1e08a185c702b0da\"" Mar 19 11:33:36.949873 systemd[1]: Started cri-containerd-af74a92a534b05be4e75c846e9f30e699faea9c2aca2e3be1e08a185c702b0da.scope - libcontainer container af74a92a534b05be4e75c846e9f30e699faea9c2aca2e3be1e08a185c702b0da. Mar 19 11:33:36.979382 containerd[1461]: time="2025-03-19T11:33:36.979340369Z" level=info msg="StartContainer for \"af74a92a534b05be4e75c846e9f30e699faea9c2aca2e3be1e08a185c702b0da\" returns successfully" Mar 19 11:33:37.123846 systemd-networkd[1395]: cali703f1fcd7dd: Gained IPv6LL Mar 19 11:33:37.288286 kubelet[2648]: E0319 11:33:37.287985 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:37.288286 kubelet[2648]: E0319 11:33:37.288240 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:37.315968 systemd-networkd[1395]: cali58d993c88e7: Gained IPv6LL Mar 19 11:33:37.507895 systemd-networkd[1395]: cali952f0c63678: Gained IPv6LL Mar 19 11:33:37.572824 systemd-networkd[1395]: cali08968b049af: Gained IPv6LL Mar 19 11:33:37.635844 systemd-networkd[1395]: cali029e707087a: Gained IPv6LL Mar 19 11:33:37.827921 systemd-networkd[1395]: cali5ab640584ac: Gained IPv6LL Mar 19 11:33:37.949246 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:43758.service - OpenSSH per-connection server daemon (10.0.0.1:43758). 
Mar 19 11:33:38.003400 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 43758 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:38.004617 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:38.009336 systemd-logind[1447]: New session 10 of user core. Mar 19 11:33:38.016836 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:33:38.036948 kubelet[2648]: I0319 11:33:38.036920 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:38.038465 kubelet[2648]: E0319 11:33:38.038392 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:38.147031 systemd[1]: run-containerd-runc-k8s.io-b497c0e6f5269e6828dae930ad69dbd65284c8a5c5215d9c5c7637e17e77dbff-runc.6HpVPp.mount: Deactivated successfully. Mar 19 11:33:38.270997 sshd[5315]: Connection closed by 10.0.0.1 port 43758 Mar 19 11:33:38.271550 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:38.282216 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:43758.service: Deactivated successfully. Mar 19 11:33:38.283891 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:33:38.285276 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:33:38.293218 kubelet[2648]: E0319 11:33:38.291816 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:38.293218 kubelet[2648]: E0319 11:33:38.291894 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:38.294075 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:43768.service - OpenSSH per-connection server daemon (10.0.0.1:43768). Mar 19 11:33:38.297473 systemd-logind[1447]: Removed session 10. Mar 19 11:33:38.335884 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 43768 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:38.337558 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:38.343254 systemd-logind[1447]: New session 11 of user core. Mar 19 11:33:38.350858 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 19 11:33:38.631396 sshd[5382]: Connection closed by 10.0.0.1 port 43768 Mar 19 11:33:38.634676 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:38.650068 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:43774.service - OpenSSH per-connection server daemon (10.0.0.1:43774). Mar 19 11:33:38.650678 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:43768.service: Deactivated successfully. Mar 19 11:33:38.654868 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:33:38.657456 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:33:38.666521 systemd-logind[1447]: Removed session 11. 
Mar 19 11:33:38.789095 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 43774 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:38.789991 containerd[1461]: time="2025-03-19T11:33:38.789959776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:38.791080 containerd[1461]: time="2025-03-19T11:33:38.790927137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 19 11:33:38.791819 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:38.793808 containerd[1461]: time="2025-03-19T11:33:38.793774176Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:38.796971 containerd[1461]: time="2025-03-19T11:33:38.796936881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:38.798228 containerd[1461]: time="2025-03-19T11:33:38.798196387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.900967313s" Mar 19 11:33:38.798283 containerd[1461]: time="2025-03-19T11:33:38.798229950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 19 11:33:38.799339 systemd-logind[1447]: New session 12 of user core. Mar 19 11:33:38.799686 containerd[1461]: time="2025-03-19T11:33:38.799651029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 19 11:33:38.801979 containerd[1461]: time="2025-03-19T11:33:38.801913578Z" level=info msg="CreateContainer within sandbox \"8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 19 11:33:38.806905 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 19 11:33:38.818892 containerd[1461]: time="2025-03-19T11:33:38.818860879Z" level=info msg="CreateContainer within sandbox \"8a0d13a5702065955b46ad17d4746fd811d7b97387ea703b61b1ad6990c50e9b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dfce2fc11448945c11e8b7034be7d4d2e7077cff00e1ae652924cd8cd73a8f75\"" Mar 19 11:33:38.819723 containerd[1461]: time="2025-03-19T11:33:38.819235351Z" level=info msg="StartContainer for \"dfce2fc11448945c11e8b7034be7d4d2e7077cff00e1ae652924cd8cd73a8f75\"" Mar 19 11:33:38.870877 systemd[1]: Started cri-containerd-dfce2fc11448945c11e8b7034be7d4d2e7077cff00e1ae652924cd8cd73a8f75.scope - libcontainer container dfce2fc11448945c11e8b7034be7d4d2e7077cff00e1ae652924cd8cd73a8f75. 
Mar 19 11:33:38.922831 containerd[1461]: time="2025-03-19T11:33:38.922726427Z" level=info msg="StartContainer for \"dfce2fc11448945c11e8b7034be7d4d2e7077cff00e1ae652924cd8cd73a8f75\" returns successfully" Mar 19 11:33:38.973535 sshd[5429]: Connection closed by 10.0.0.1 port 43774 Mar 19 11:33:38.973790 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:38.976305 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:33:38.977116 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Mar 19 11:33:38.977285 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:43774.service: Deactivated successfully. Mar 19 11:33:38.980828 systemd-logind[1447]: Removed session 12. Mar 19 11:33:39.047508 containerd[1461]: time="2025-03-19T11:33:39.047462991Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:39.047899 containerd[1461]: time="2025-03-19T11:33:39.047856224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 19 11:33:39.050487 containerd[1461]: time="2025-03-19T11:33:39.050458916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 250.777886ms" Mar 19 11:33:39.050539 containerd[1461]: time="2025-03-19T11:33:39.050489879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 19 11:33:39.052070 containerd[1461]: time="2025-03-19T11:33:39.052008803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 19 11:33:39.054584 containerd[1461]: time="2025-03-19T11:33:39.054553371Z" level=info msg="CreateContainer within sandbox \"3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 19 11:33:39.066941 containerd[1461]: time="2025-03-19T11:33:39.066906062Z" level=info msg="CreateContainer within sandbox \"3c297c24c505dfea6191adeb8adbf57ea8583e993faacad00a805895397d133f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9e5bdc4b8c4e0ce0242199ef0907850fc465fedd8b5e872b6be6707cc5da7a44\"" Mar 19 11:33:39.067259 containerd[1461]: time="2025-03-19T11:33:39.067231688Z" level=info msg="StartContainer for \"9e5bdc4b8c4e0ce0242199ef0907850fc465fedd8b5e872b6be6707cc5da7a44\"" Mar 19 11:33:39.097047 systemd[1]: Started cri-containerd-9e5bdc4b8c4e0ce0242199ef0907850fc465fedd8b5e872b6be6707cc5da7a44.scope - libcontainer container 9e5bdc4b8c4e0ce0242199ef0907850fc465fedd8b5e872b6be6707cc5da7a44. 
Mar 19 11:33:39.137166 containerd[1461]: time="2025-03-19T11:33:39.137124244Z" level=info msg="StartContainer for \"9e5bdc4b8c4e0ce0242199ef0907850fc465fedd8b5e872b6be6707cc5da7a44\" returns successfully" Mar 19 11:33:39.306775 kubelet[2648]: E0319 11:33:39.304431 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:39.332298 kubelet[2648]: I0319 11:33:39.332237 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c7dddc8f-lckq4" podStartSLOduration=15.518219603 podStartE2EDuration="18.33222056s" podCreationTimestamp="2025-03-19 11:33:21 +0000 UTC" firstStartedPulling="2025-03-19 11:33:35.984863686 +0000 UTC m=+37.132114245" lastFinishedPulling="2025-03-19 11:33:38.798864643 +0000 UTC m=+39.946115202" observedRunningTime="2025-03-19 11:33:39.314278452 +0000 UTC m=+40.461529051" watchObservedRunningTime="2025-03-19 11:33:39.33222056 +0000 UTC m=+40.479471119" Mar 19 11:33:40.309729 kubelet[2648]: I0319 11:33:40.306263 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:40.309729 kubelet[2648]: I0319 11:33:40.306509 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:40.455288 containerd[1461]: time="2025-03-19T11:33:40.455237647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:40.456687 containerd[1461]: time="2025-03-19T11:33:40.456586195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 19 11:33:40.457499 containerd[1461]: time="2025-03-19T11:33:40.457467705Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:40.460209 containerd[1461]: time="2025-03-19T11:33:40.460169201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:40.460748 containerd[1461]: time="2025-03-19T11:33:40.460710764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.408622314s" Mar 19 11:33:40.460813 containerd[1461]: time="2025-03-19T11:33:40.460749807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 19 11:33:40.461989 containerd[1461]: time="2025-03-19T11:33:40.461962864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 19 11:33:40.469306 containerd[1461]: time="2025-03-19T11:33:40.469271087Z" level=info msg="CreateContainer within sandbox \"a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 19 11:33:40.481281 containerd[1461]: 
time="2025-03-19T11:33:40.481239563Z" level=info msg="CreateContainer within sandbox \"a19e74c721d8832323b6cf7bbc027660c5ed7c43d92696d275dcceb0ba7990bc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7d57be6e303c2c39cee70aa292aefc3226c7d651d5817fe8c36bb5351465ab02\"" Mar 19 11:33:40.481717 containerd[1461]: time="2025-03-19T11:33:40.481681679Z" level=info msg="StartContainer for \"7d57be6e303c2c39cee70aa292aefc3226c7d651d5817fe8c36bb5351465ab02\"" Mar 19 11:33:40.516867 systemd[1]: Started cri-containerd-7d57be6e303c2c39cee70aa292aefc3226c7d651d5817fe8c36bb5351465ab02.scope - libcontainer container 7d57be6e303c2c39cee70aa292aefc3226c7d651d5817fe8c36bb5351465ab02. Mar 19 11:33:40.552640 containerd[1461]: time="2025-03-19T11:33:40.552602742Z" level=info msg="StartContainer for \"7d57be6e303c2c39cee70aa292aefc3226c7d651d5817fe8c36bb5351465ab02\" returns successfully" Mar 19 11:33:41.350133 kubelet[2648]: I0319 11:33:41.350005 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-689bbc887b-2vs8c" podStartSLOduration=15.97201065 podStartE2EDuration="20.349953184s" podCreationTimestamp="2025-03-19 11:33:21 +0000 UTC" firstStartedPulling="2025-03-19 11:33:36.083940964 +0000 UTC m=+37.231191523" lastFinishedPulling="2025-03-19 11:33:40.461883458 +0000 UTC m=+41.609134057" observedRunningTime="2025-03-19 11:33:41.349867458 +0000 UTC m=+42.497118057" watchObservedRunningTime="2025-03-19 11:33:41.349953184 +0000 UTC m=+42.497203783" Mar 19 11:33:41.353323 kubelet[2648]: I0319 11:33:41.352805 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c7dddc8f-6t24c" podStartSLOduration=17.346447499 podStartE2EDuration="20.352786445s" podCreationTimestamp="2025-03-19 11:33:21 +0000 UTC" firstStartedPulling="2025-03-19 11:33:36.044899874 +0000 UTC m=+37.192150473" lastFinishedPulling="2025-03-19 11:33:39.05123882 +0000 UTC m=+40.198489419" observedRunningTime="2025-03-19 11:33:39.332060787 +0000 UTC m=+40.479311386" watchObservedRunningTime="2025-03-19 11:33:41.352786445 +0000 UTC m=+42.500037124" Mar 19 11:33:41.521607 containerd[1461]: time="2025-03-19T11:33:41.521559138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:41.522350 containerd[1461]: time="2025-03-19T11:33:41.522232150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 19 11:33:41.523621 containerd[1461]: time="2025-03-19T11:33:41.523555054Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:41.525993 containerd[1461]: time="2025-03-19T11:33:41.525935920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:41.527103 containerd[1461]: time="2025-03-19T11:33:41.526748863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.064754917s" Mar 19 11:33:41.527103 containerd[1461]: time="2025-03-19T11:33:41.526784666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 19 11:33:41.530311 containerd[1461]: time="2025-03-19T11:33:41.530276818Z" level=info msg="CreateContainer within sandbox \"e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 19 11:33:41.543261 containerd[1461]: time="2025-03-19T11:33:41.543141742Z" level=info msg="CreateContainer within sandbox \"e2d0b7910caa9790611b5ee33283e67a08a75fe334ff7b5f8348e7939eede658\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4143573ab5d0ef66745dd4a605e5995129fea9f5714ba360dbfe82d8170e5eb6\"" Mar 19 11:33:41.544746 containerd[1461]: time="2025-03-19T11:33:41.543735789Z" level=info msg="StartContainer for \"4143573ab5d0ef66745dd4a605e5995129fea9f5714ba360dbfe82d8170e5eb6\"" Mar 19 11:33:41.568110 systemd[1]: run-containerd-runc-k8s.io-4143573ab5d0ef66745dd4a605e5995129fea9f5714ba360dbfe82d8170e5eb6-runc.68jwRf.mount: Deactivated successfully. Mar 19 11:33:41.584859 systemd[1]: Started cri-containerd-4143573ab5d0ef66745dd4a605e5995129fea9f5714ba360dbfe82d8170e5eb6.scope - libcontainer container 4143573ab5d0ef66745dd4a605e5995129fea9f5714ba360dbfe82d8170e5eb6. Mar 19 11:33:41.610110 containerd[1461]: time="2025-03-19T11:33:41.609822227Z" level=info msg="StartContainer for \"4143573ab5d0ef66745dd4a605e5995129fea9f5714ba360dbfe82d8170e5eb6\" returns successfully" Mar 19 11:33:42.028636 kubelet[2648]: I0319 11:33:42.028314 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:42.043800 kubelet[2648]: I0319 11:33:42.043692 2648 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 19 11:33:42.056113 kubelet[2648]: I0319 11:33:42.055642 2648 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 19 11:33:42.329819 kubelet[2648]: I0319 11:33:42.329631 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zqr5d" podStartSLOduration=15.694502463 podStartE2EDuration="21.329613168s" podCreationTimestamp="2025-03-19 11:33:21 +0000 UTC" firstStartedPulling="2025-03-19 11:33:35.893421577 +0000 UTC m=+37.040672176" lastFinishedPulling="2025-03-19 11:33:41.528532282 +0000 UTC m=+42.675782881" observedRunningTime="2025-03-19 11:33:42.329473678 +0000 UTC m=+43.476724277" watchObservedRunningTime="2025-03-19 11:33:42.329613168 +0000 UTC m=+43.476863767" Mar 19 11:33:42.550000 kubelet[2648]: I0319 11:33:42.549947 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:43.986048 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:33344.service - OpenSSH per-connection server daemon (10.0.0.1:33344). 
Mar 19 11:33:44.037437 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 33344 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:44.039266 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:44.043933 systemd-logind[1447]: New session 13 of user core. Mar 19 11:33:44.050867 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:33:44.252812 sshd[5761]: Connection closed by 10.0.0.1 port 33344 Mar 19 11:33:44.253100 sshd-session[5746]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:44.264751 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:33344.service: Deactivated successfully. Mar 19 11:33:44.266297 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:33:44.267460 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:33:44.268676 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:33358.service - OpenSSH per-connection server daemon (10.0.0.1:33358). Mar 19 11:33:44.269451 systemd-logind[1447]: Removed session 13. Mar 19 11:33:44.308906 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 33358 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:44.310030 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:44.313670 systemd-logind[1447]: New session 14 of user core. Mar 19 11:33:44.319829 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 19 11:33:44.522070 sshd[5778]: Connection closed by 10.0.0.1 port 33358 Mar 19 11:33:44.522751 sshd-session[5775]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:44.541733 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:33358.service: Deactivated successfully. Mar 19 11:33:44.543278 systemd[1]: session-14.scope: Deactivated successfully. Mar 19 11:33:44.543979 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Mar 19 11:33:44.554052 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:33366.service - OpenSSH per-connection server daemon (10.0.0.1:33366). Mar 19 11:33:44.555435 systemd-logind[1447]: Removed session 14. Mar 19 11:33:44.593353 sshd[5788]: Accepted publickey for core from 10.0.0.1 port 33366 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:44.594408 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:44.598068 systemd-logind[1447]: New session 15 of user core. Mar 19 11:33:44.605834 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 19 11:33:46.035545 sshd[5793]: Connection closed by 10.0.0.1 port 33366 Mar 19 11:33:46.036372 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:46.047863 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:33366.service: Deactivated successfully. Mar 19 11:33:46.051394 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:33:46.053073 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:33:46.064012 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:33382.service - OpenSSH per-connection server daemon (10.0.0.1:33382). Mar 19 11:33:46.066974 systemd-logind[1447]: Removed session 15. 
Mar 19 11:33:46.107690 sshd[5840]: Accepted publickey for core from 10.0.0.1 port 33382 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:46.108900 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:46.113408 systemd-logind[1447]: New session 16 of user core. Mar 19 11:33:46.124851 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 19 11:33:46.429865 sshd[5849]: Connection closed by 10.0.0.1 port 33382 Mar 19 11:33:46.430669 sshd-session[5840]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:46.442218 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:33382.service: Deactivated successfully. Mar 19 11:33:46.445484 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:33:46.446689 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:33:46.455959 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:33396.service - OpenSSH per-connection server daemon (10.0.0.1:33396). Mar 19 11:33:46.457728 systemd-logind[1447]: Removed session 16. Mar 19 11:33:46.502776 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 33396 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:46.504096 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:46.509424 systemd-logind[1447]: New session 17 of user core. Mar 19 11:33:46.515881 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 19 11:33:46.652437 sshd[5878]: Connection closed by 10.0.0.1 port 33396 Mar 19 11:33:46.652966 sshd-session[5875]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:46.656004 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:33396.service: Deactivated successfully. Mar 19 11:33:46.658235 systemd[1]: session-17.scope: Deactivated successfully. Mar 19 11:33:46.659466 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Mar 19 11:33:46.660232 systemd-logind[1447]: Removed session 17. Mar 19 11:33:48.804854 kubelet[2648]: I0319 11:33:48.804792 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:33:48.805478 kubelet[2648]: E0319 11:33:48.805454 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:48.965724 kernel: bpftool[5961]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 19 11:33:49.124510 systemd-networkd[1395]: vxlan.calico: Link UP Mar 19 11:33:49.124520 systemd-networkd[1395]: vxlan.calico: Gained carrier Mar 19 11:33:49.334712 kubelet[2648]: E0319 11:33:49.334674 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:33:50.691848 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Mar 19 11:33:51.670626 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:33406.service - OpenSSH per-connection server daemon (10.0.0.1:33406). Mar 19 11:33:51.717950 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 33406 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:51.719131 sshd-session[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:51.723545 systemd-logind[1447]: New session 18 of user core. 
Mar 19 11:33:51.737906 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 19 11:33:51.926308 sshd[6093]: Connection closed by 10.0.0.1 port 33406 Mar 19 11:33:51.926328 sshd-session[6091]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:51.929253 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:33406.service: Deactivated successfully. Mar 19 11:33:51.931760 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Mar 19 11:33:51.931848 systemd[1]: session-18.scope: Deactivated successfully. Mar 19 11:33:51.933584 systemd-logind[1447]: Removed session 18. Mar 19 11:33:56.942223 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:56622.service - OpenSSH per-connection server daemon (10.0.0.1:56622). Mar 19 11:33:56.982805 sshd[6115]: Accepted publickey for core from 10.0.0.1 port 56622 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:33:56.984049 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:56.988204 systemd-logind[1447]: New session 19 of user core. Mar 19 11:33:57.000865 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 19 11:33:57.159317 sshd[6117]: Connection closed by 10.0.0.1 port 56622 Mar 19 11:33:57.159641 sshd-session[6115]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:57.163030 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:56622.service: Deactivated successfully. Mar 19 11:33:57.165341 systemd[1]: session-19.scope: Deactivated successfully. Mar 19 11:33:57.166099 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Mar 19 11:33:57.167200 systemd-logind[1447]: Removed session 19. Mar 19 11:33:58.938060 containerd[1461]: time="2025-03-19T11:33:58.938022270Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:58.938475 containerd[1461]: time="2025-03-19T11:33:58.938131596Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" Mar 19 11:33:58.938475 containerd[1461]: time="2025-03-19T11:33:58.938142837Z" level=info msg="StopPodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 11:33:58.938475 containerd[1461]: time="2025-03-19T11:33:58.938457976Z" level=info msg="RemovePodSandbox for \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:58.940673 containerd[1461]: time="2025-03-19T11:33:58.940391132Z" level=info msg="Forcibly stopping sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\"" Mar 19 11:33:58.940673 containerd[1461]: time="2025-03-19T11:33:58.940489138Z" level=info msg="TearDown network for sandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" successfully" Mar 19 11:33:58.949029 containerd[1461]: time="2025-03-19T11:33:58.948994608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:58.949124 containerd[1461]: time="2025-03-19T11:33:58.949067772Z" level=info msg="RemovePodSandbox \"a271a150747bbb4020cb3bf52b829ce124a99edd063d3622eb347b35dd8283f7\" returns successfully" Mar 19 11:33:58.949546 containerd[1461]: time="2025-03-19T11:33:58.949520079Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:58.949628 containerd[1461]: time="2025-03-19T11:33:58.949610725Z" level=info msg="TearDown network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" successfully" Mar 19 11:33:58.949661 containerd[1461]: time="2025-03-19T11:33:58.949627166Z" level=info msg="StopPodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" returns successfully" Mar 19 11:33:58.949995 containerd[1461]: time="2025-03-19T11:33:58.949963586Z" level=info msg="RemovePodSandbox for \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:58.949995 containerd[1461]: time="2025-03-19T11:33:58.949996268Z" level=info msg="Forcibly stopping sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\"" Mar 19 11:33:58.950081 containerd[1461]: time="2025-03-19T11:33:58.950053271Z" level=info msg="TearDown network for sandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" successfully" Mar 19 11:33:58.952647 containerd[1461]: time="2025-03-19T11:33:58.952619625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:58.952747 containerd[1461]: time="2025-03-19T11:33:58.952666788Z" level=info msg="RemovePodSandbox \"7f5356f870b478e27cbfd8ec9b3968ef3f1b3051565f750e5f561cbe82e41997\" returns successfully" Mar 19 11:33:58.953193 containerd[1461]: time="2025-03-19T11:33:58.953022970Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" Mar 19 11:33:58.953193 containerd[1461]: time="2025-03-19T11:33:58.953108255Z" level=info msg="TearDown network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" successfully" Mar 19 11:33:58.953193 containerd[1461]: time="2025-03-19T11:33:58.953119895Z" level=info msg="StopPodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" returns successfully" Mar 19 11:33:58.953412 containerd[1461]: time="2025-03-19T11:33:58.953385671Z" level=info msg="RemovePodSandbox for \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" Mar 19 11:33:58.953449 containerd[1461]: time="2025-03-19T11:33:58.953415553Z" level=info msg="Forcibly stopping sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\"" Mar 19 11:33:58.953512 containerd[1461]: time="2025-03-19T11:33:58.953480477Z" level=info msg="TearDown network for sandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" successfully" Mar 19 11:33:58.956204 containerd[1461]: time="2025-03-19T11:33:58.956167238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:58.956272 containerd[1461]: time="2025-03-19T11:33:58.956215401Z" level=info msg="RemovePodSandbox \"6b19b70bd9195dc8b29dd3079b94102436dd01dfd5a82191c81d04d9d6117984\" returns successfully" Mar 19 11:33:58.956655 containerd[1461]: time="2025-03-19T11:33:58.956580223Z" level=info msg="StopPodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\"" Mar 19 11:33:58.956763 containerd[1461]: time="2025-03-19T11:33:58.956661868Z" level=info msg="TearDown network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" successfully" Mar 19 11:33:58.956763 containerd[1461]: time="2025-03-19T11:33:58.956673669Z" level=info msg="StopPodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" returns successfully" Mar 19 11:33:58.957739 containerd[1461]: time="2025-03-19T11:33:58.957009569Z" level=info msg="RemovePodSandbox for \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\"" Mar 19 11:33:58.957739 containerd[1461]: time="2025-03-19T11:33:58.957036490Z" level=info msg="Forcibly stopping sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\"" Mar 19 11:33:58.957739 containerd[1461]: time="2025-03-19T11:33:58.957103094Z" level=info msg="TearDown network for sandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" successfully" Mar 19 11:33:58.960352 containerd[1461]: time="2025-03-19T11:33:58.960324488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:58.960496 containerd[1461]: time="2025-03-19T11:33:58.960459656Z" level=info msg="RemovePodSandbox \"e5736145726a2c7b6ec32dec406d3b9c3199269cf67655f2990502c0e3b48af4\" returns successfully" Mar 19 11:33:58.961176 containerd[1461]: time="2025-03-19T11:33:58.960911723Z" level=info msg="StopPodSandbox for \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\"" Mar 19 11:33:58.961352 containerd[1461]: time="2025-03-19T11:33:58.961331548Z" level=info msg="TearDown network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\" successfully" Mar 19 11:33:58.961525 containerd[1461]: time="2025-03-19T11:33:58.961504398Z" level=info msg="StopPodSandbox for \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\" returns successfully" Mar 19 11:33:58.961971 containerd[1461]: time="2025-03-19T11:33:58.961896622Z" level=info msg="RemovePodSandbox for \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\"" Mar 19 11:33:58.962115 containerd[1461]: time="2025-03-19T11:33:58.962095674Z" level=info msg="Forcibly stopping sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\"" Mar 19 11:33:58.962386 containerd[1461]: time="2025-03-19T11:33:58.962356770Z" level=info msg="TearDown network for sandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\" successfully" Mar 19 11:33:58.965598 containerd[1461]: time="2025-03-19T11:33:58.965563722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:58.965747 containerd[1461]: time="2025-03-19T11:33:58.965692730Z" level=info msg="RemovePodSandbox \"e6f1dfc888c1fc0260441b35a4d99de4a5bfbcce3f78bb3e4de1bd2d4d9f0914\" returns successfully" Mar 19 11:33:58.967198 containerd[1461]: time="2025-03-19T11:33:58.966571702Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:58.967891 containerd[1461]: time="2025-03-19T11:33:58.967386871Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:58.967891 containerd[1461]: time="2025-03-19T11:33:58.967410993Z" level=info msg="StopPodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:58.968431 containerd[1461]: time="2025-03-19T11:33:58.968407213Z" level=info msg="RemovePodSandbox for \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:58.968470 containerd[1461]: time="2025-03-19T11:33:58.968435214Z" level=info msg="Forcibly stopping sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\"" Mar 19 11:33:58.969338 containerd[1461]: time="2025-03-19T11:33:58.969282265Z" level=info msg="TearDown network for sandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" successfully" Mar 19 11:33:58.976780 containerd[1461]: time="2025-03-19T11:33:58.976723031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:58.976856 containerd[1461]: time="2025-03-19T11:33:58.976810997Z" level=info msg="RemovePodSandbox \"d677a002635c0165fe17e58980ce8c57b6dd2922ea0d5c3e100c14d10e4fb4dc\" returns successfully" Mar 19 11:33:58.977361 containerd[1461]: time="2025-03-19T11:33:58.977339428Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:58.977442 containerd[1461]: time="2025-03-19T11:33:58.977426554Z" level=info msg="TearDown network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" successfully" Mar 19 11:33:58.977474 containerd[1461]: time="2025-03-19T11:33:58.977441155Z" level=info msg="StopPodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" returns successfully" Mar 19 11:33:58.977759 containerd[1461]: time="2025-03-19T11:33:58.977735772Z" level=info msg="RemovePodSandbox for \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:58.977791 containerd[1461]: time="2025-03-19T11:33:58.977765134Z" level=info msg="Forcibly stopping sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\"" Mar 19 11:33:58.977843 containerd[1461]: time="2025-03-19T11:33:58.977828378Z" level=info msg="TearDown network for sandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" successfully" Mar 19 11:33:58.988528 containerd[1461]: time="2025-03-19T11:33:58.988486977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:58.988599 containerd[1461]: time="2025-03-19T11:33:58.988550221Z" level=info msg="RemovePodSandbox \"f4c258a44ef70b0c750a08f77d343f2b9d903d24f8e17774fa46e3fd0fe67027\" returns successfully" Mar 19 11:33:58.989038 containerd[1461]: time="2025-03-19T11:33:58.988991728Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" Mar 19 11:33:58.989100 containerd[1461]: time="2025-03-19T11:33:58.989080573Z" level=info msg="TearDown network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" successfully" Mar 19 11:33:58.989100 containerd[1461]: time="2025-03-19T11:33:58.989095254Z" level=info msg="StopPodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" returns successfully" Mar 19 11:33:58.989457 containerd[1461]: time="2025-03-19T11:33:58.989424833Z" level=info msg="RemovePodSandbox for \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" Mar 19 11:33:58.989457 containerd[1461]: time="2025-03-19T11:33:58.989453355Z" level=info msg="Forcibly stopping sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\"" Mar 19 11:33:58.989531 containerd[1461]: time="2025-03-19T11:33:58.989510599Z" level=info msg="TearDown network for sandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" successfully" Mar 19 11:33:59.005528 containerd[1461]: time="2025-03-19T11:33:59.005469033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.005658 containerd[1461]: time="2025-03-19T11:33:59.005540478Z" level=info msg="RemovePodSandbox \"6360b95202cb5ae95ad671cd9efbb7815a5fd51b100536aff81e960b6b3f01de\" returns successfully" Mar 19 11:33:59.006035 containerd[1461]: time="2025-03-19T11:33:59.005998785Z" level=info msg="StopPodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\"" Mar 19 11:33:59.006122 containerd[1461]: time="2025-03-19T11:33:59.006105831Z" level=info msg="TearDown network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" successfully" Mar 19 11:33:59.006122 containerd[1461]: time="2025-03-19T11:33:59.006120752Z" level=info msg="StopPodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" returns successfully" Mar 19 11:33:59.006733 containerd[1461]: time="2025-03-19T11:33:59.006530496Z" level=info msg="RemovePodSandbox for \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\"" Mar 19 11:33:59.006733 containerd[1461]: time="2025-03-19T11:33:59.006563818Z" level=info msg="Forcibly stopping sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\"" Mar 19 11:33:59.006733 containerd[1461]: time="2025-03-19T11:33:59.006629662Z" level=info msg="TearDown network for sandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" successfully" Mar 19 11:33:59.009546 containerd[1461]: time="2025-03-19T11:33:59.009512354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.009634 containerd[1461]: time="2025-03-19T11:33:59.009565597Z" level=info msg="RemovePodSandbox \"87b074bdfc6a37cb8a6075d5957bdaac5d0be6ecd6f1d6975da453edb5c67411\" returns successfully" Mar 19 11:33:59.009938 containerd[1461]: time="2025-03-19T11:33:59.009912937Z" level=info msg="StopPodSandbox for \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\"" Mar 19 11:33:59.010032 containerd[1461]: time="2025-03-19T11:33:59.010015304Z" level=info msg="TearDown network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\" successfully" Mar 19 11:33:59.010087 containerd[1461]: time="2025-03-19T11:33:59.010031264Z" level=info msg="StopPodSandbox for \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\" returns successfully" Mar 19 11:33:59.010427 containerd[1461]: time="2025-03-19T11:33:59.010402487Z" level=info msg="RemovePodSandbox for \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\"" Mar 19 11:33:59.010461 containerd[1461]: time="2025-03-19T11:33:59.010432008Z" level=info msg="Forcibly stopping sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\"" Mar 19 11:33:59.010630 containerd[1461]: time="2025-03-19T11:33:59.010496412Z" level=info msg="TearDown network for sandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\" successfully" Mar 19 11:33:59.013338 containerd[1461]: time="2025-03-19T11:33:59.013290938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.013418 containerd[1461]: time="2025-03-19T11:33:59.013354342Z" level=info msg="RemovePodSandbox \"7eec58c9fd29cabbfe11149ab33ce8bdd1324ff98dca8b9b9983a8f40c9571d2\" returns successfully" Mar 19 11:33:59.013801 containerd[1461]: time="2025-03-19T11:33:59.013769487Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:59.013941 containerd[1461]: time="2025-03-19T11:33:59.013865092Z" level=info msg="TearDown network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:59.013941 containerd[1461]: time="2025-03-19T11:33:59.013931696Z" level=info msg="StopPodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:59.014275 containerd[1461]: time="2025-03-19T11:33:59.014253955Z" level=info msg="RemovePodSandbox for \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:59.014321 containerd[1461]: time="2025-03-19T11:33:59.014277757Z" level=info msg="Forcibly stopping sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\"" Mar 19 11:33:59.014365 containerd[1461]: time="2025-03-19T11:33:59.014348401Z" level=info msg="TearDown network for sandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" successfully" Mar 19 11:33:59.022538 containerd[1461]: time="2025-03-19T11:33:59.022493685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.022594 containerd[1461]: time="2025-03-19T11:33:59.022555569Z" level=info msg="RemovePodSandbox \"3a8b4600d211c68aa8ab7780f6c65651e1ee6c2c33293d7409e8e7446306e948\" returns successfully" Mar 19 11:33:59.023021 containerd[1461]: time="2025-03-19T11:33:59.022993635Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:59.023112 containerd[1461]: time="2025-03-19T11:33:59.023095041Z" level=info msg="TearDown network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" successfully" Mar 19 11:33:59.023112 containerd[1461]: time="2025-03-19T11:33:59.023109922Z" level=info msg="StopPodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" returns successfully" Mar 19 11:33:59.023427 containerd[1461]: time="2025-03-19T11:33:59.023398259Z" level=info msg="RemovePodSandbox for \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:59.024173 containerd[1461]: time="2025-03-19T11:33:59.023523106Z" level=info msg="Forcibly stopping sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\"" Mar 19 11:33:59.024173 containerd[1461]: time="2025-03-19T11:33:59.023593430Z" level=info msg="TearDown network for sandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" successfully" Mar 19 11:33:59.026018 containerd[1461]: time="2025-03-19T11:33:59.025972252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.026059 containerd[1461]: time="2025-03-19T11:33:59.026033175Z" level=info msg="RemovePodSandbox \"d607840114d2050d57bda6c8b2f144d1d6026a1943ef7e56b3f301a2b721b468\" returns successfully" Mar 19 11:33:59.026416 containerd[1461]: time="2025-03-19T11:33:59.026367075Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" Mar 19 11:33:59.026490 containerd[1461]: time="2025-03-19T11:33:59.026474202Z" level=info msg="TearDown network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" successfully" Mar 19 11:33:59.026522 containerd[1461]: time="2025-03-19T11:33:59.026488883Z" level=info msg="StopPodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" returns successfully" Mar 19 11:33:59.026766 containerd[1461]: time="2025-03-19T11:33:59.026742298Z" level=info msg="RemovePodSandbox for \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" Mar 19 11:33:59.026800 containerd[1461]: time="2025-03-19T11:33:59.026771539Z" level=info msg="Forcibly stopping sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\"" Mar 19 11:33:59.026848 containerd[1461]: time="2025-03-19T11:33:59.026834023Z" level=info msg="TearDown network for sandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" successfully" Mar 19 11:33:59.029288 containerd[1461]: time="2025-03-19T11:33:59.029250047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.029330 containerd[1461]: time="2025-03-19T11:33:59.029305730Z" level=info msg="RemovePodSandbox \"8b146114449502d8e4cc432b68acc734026e0ce4ee198485e9118c4a517a1a59\" returns successfully" Mar 19 11:33:59.029609 containerd[1461]: time="2025-03-19T11:33:59.029576266Z" level=info msg="StopPodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\"" Mar 19 11:33:59.029690 containerd[1461]: time="2025-03-19T11:33:59.029669832Z" level=info msg="TearDown network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" successfully" Mar 19 11:33:59.029735 containerd[1461]: time="2025-03-19T11:33:59.029689073Z" level=info msg="StopPodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" returns successfully" Mar 19 11:33:59.030008 containerd[1461]: time="2025-03-19T11:33:59.029959809Z" level=info msg="RemovePodSandbox for \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\"" Mar 19 11:33:59.030008 containerd[1461]: time="2025-03-19T11:33:59.029999931Z" level=info msg="Forcibly stopping sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\"" Mar 19 11:33:59.030120 containerd[1461]: time="2025-03-19T11:33:59.030078856Z" level=info msg="TearDown network for sandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" successfully" Mar 19 11:33:59.032429 containerd[1461]: time="2025-03-19T11:33:59.032393954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.032475 containerd[1461]: time="2025-03-19T11:33:59.032448037Z" level=info msg="RemovePodSandbox \"43900db2bf6ecb2f34e1a24c2af6cdd3b142347dfc860f21ba7ad2fc8a5196a5\" returns successfully" Mar 19 11:33:59.032789 containerd[1461]: time="2025-03-19T11:33:59.032764776Z" level=info msg="StopPodSandbox for \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\"" Mar 19 11:33:59.032876 containerd[1461]: time="2025-03-19T11:33:59.032856461Z" level=info msg="TearDown network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\" successfully" Mar 19 11:33:59.032876 containerd[1461]: time="2025-03-19T11:33:59.032873782Z" level=info msg="StopPodSandbox for \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\" returns successfully" Mar 19 11:33:59.033206 containerd[1461]: time="2025-03-19T11:33:59.033171840Z" level=info msg="RemovePodSandbox for \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\"" Mar 19 11:33:59.033247 containerd[1461]: time="2025-03-19T11:33:59.033211722Z" level=info msg="Forcibly stopping sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\"" Mar 19 11:33:59.033294 containerd[1461]: time="2025-03-19T11:33:59.033279086Z" level=info msg="TearDown network for sandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\" successfully" Mar 19 11:33:59.035600 containerd[1461]: time="2025-03-19T11:33:59.035560142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.035635 containerd[1461]: time="2025-03-19T11:33:59.035619585Z" level=info msg="RemovePodSandbox \"5e09155cafc956dde4348b8d64bf712f2c07f631396a0c614b39062aee4dedda\" returns successfully" Mar 19 11:33:59.036091 containerd[1461]: time="2025-03-19T11:33:59.036052251Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:59.036169 containerd[1461]: time="2025-03-19T11:33:59.036148657Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:59.036169 containerd[1461]: time="2025-03-19T11:33:59.036165978Z" level=info msg="StopPodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:59.036615 containerd[1461]: time="2025-03-19T11:33:59.036586523Z" level=info msg="RemovePodSandbox for \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:59.036615 containerd[1461]: time="2025-03-19T11:33:59.036615724Z" level=info msg="Forcibly stopping sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\"" Mar 19 11:33:59.036737 containerd[1461]: time="2025-03-19T11:33:59.036682048Z" level=info msg="TearDown network for sandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" successfully" Mar 19 11:33:59.039083 containerd[1461]: time="2025-03-19T11:33:59.039045469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.039120 containerd[1461]: time="2025-03-19T11:33:59.039105832Z" level=info msg="RemovePodSandbox \"3ea3f2e6faf77197cd37f92a2cd1c97ae6229f01d05cba176034149b3830822d\" returns successfully" Mar 19 11:33:59.039477 containerd[1461]: time="2025-03-19T11:33:59.039452533Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:59.039563 containerd[1461]: time="2025-03-19T11:33:59.039545299Z" level=info msg="TearDown network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" successfully" Mar 19 11:33:59.039563 containerd[1461]: time="2025-03-19T11:33:59.039562060Z" level=info msg="StopPodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" returns successfully" Mar 19 11:33:59.039825 containerd[1461]: time="2025-03-19T11:33:59.039804634Z" level=info msg="RemovePodSandbox for \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:59.039900 containerd[1461]: time="2025-03-19T11:33:59.039828195Z" level=info msg="Forcibly stopping sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\"" Mar 19 11:33:59.039900 containerd[1461]: time="2025-03-19T11:33:59.039890319Z" level=info msg="TearDown network for sandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" successfully" Mar 19 11:33:59.042341 containerd[1461]: time="2025-03-19T11:33:59.042302182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.042390 containerd[1461]: time="2025-03-19T11:33:59.042350905Z" level=info msg="RemovePodSandbox \"2937926c55e84c401fc821b2e0e3ae9e8e31ac70c242102f446ec508b2fa0db3\" returns successfully" Mar 19 11:33:59.042669 containerd[1461]: time="2025-03-19T11:33:59.042628282Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" Mar 19 11:33:59.042759 containerd[1461]: time="2025-03-19T11:33:59.042739848Z" level=info msg="TearDown network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" successfully" Mar 19 11:33:59.042759 containerd[1461]: time="2025-03-19T11:33:59.042751809Z" level=info msg="StopPodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" returns successfully" Mar 19 11:33:59.043127 containerd[1461]: time="2025-03-19T11:33:59.043098830Z" level=info msg="RemovePodSandbox for \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" Mar 19 11:33:59.043177 containerd[1461]: time="2025-03-19T11:33:59.043143792Z" level=info msg="Forcibly stopping sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\"" Mar 19 11:33:59.043228 containerd[1461]: time="2025-03-19T11:33:59.043211796Z" level=info msg="TearDown network for sandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" successfully" Mar 19 11:33:59.045574 containerd[1461]: time="2025-03-19T11:33:59.045536175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.045623 containerd[1461]: time="2025-03-19T11:33:59.045591578Z" level=info msg="RemovePodSandbox \"48948cb0c317d084f4cc19a67ac063bc939f46f6a20c763ae61b6e330168de3b\" returns successfully" Mar 19 11:33:59.045919 containerd[1461]: time="2025-03-19T11:33:59.045879435Z" level=info msg="StopPodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\"" Mar 19 11:33:59.049819 containerd[1461]: time="2025-03-19T11:33:59.049782907Z" level=info msg="TearDown network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" successfully" Mar 19 11:33:59.049819 containerd[1461]: time="2025-03-19T11:33:59.049813549Z" level=info msg="StopPodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" returns successfully" Mar 19 11:33:59.050135 containerd[1461]: time="2025-03-19T11:33:59.050113527Z" level=info msg="RemovePodSandbox for \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\"" Mar 19 11:33:59.050135 containerd[1461]: time="2025-03-19T11:33:59.050138448Z" level=info msg="Forcibly stopping sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\"" Mar 19 11:33:59.050221 containerd[1461]: time="2025-03-19T11:33:59.050197332Z" level=info msg="TearDown network for sandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" successfully" Mar 19 11:33:59.052682 containerd[1461]: time="2025-03-19T11:33:59.052642197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.052746 containerd[1461]: time="2025-03-19T11:33:59.052710881Z" level=info msg="RemovePodSandbox \"ce035b496539d5aba318af28b4783015c4358e058bb58229c9ab5e3f8757f10f\" returns successfully" Mar 19 11:33:59.053090 containerd[1461]: time="2025-03-19T11:33:59.053017819Z" level=info msg="StopPodSandbox for \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\"" Mar 19 11:33:59.053166 containerd[1461]: time="2025-03-19T11:33:59.053148787Z" level=info msg="TearDown network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\" successfully" Mar 19 11:33:59.053166 containerd[1461]: time="2025-03-19T11:33:59.053162908Z" level=info msg="StopPodSandbox for \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\" returns successfully" Mar 19 11:33:59.053435 containerd[1461]: time="2025-03-19T11:33:59.053414003Z" level=info msg="RemovePodSandbox for \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\"" Mar 19 11:33:59.053435 containerd[1461]: time="2025-03-19T11:33:59.053438084Z" level=info msg="Forcibly stopping sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\"" Mar 19 11:33:59.053524 containerd[1461]: time="2025-03-19T11:33:59.053498648Z" level=info msg="TearDown network for sandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\" successfully" Mar 19 11:33:59.055961 containerd[1461]: time="2025-03-19T11:33:59.055922472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.058032 containerd[1461]: time="2025-03-19T11:33:59.056021478Z" level=info msg="RemovePodSandbox \"bb68e2530ee066496d2ab14d83018864811bcb3b2eaf3c951c0262469caca4c9\" returns successfully" Mar 19 11:33:59.058613 containerd[1461]: time="2025-03-19T11:33:59.058586390Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:59.058708 containerd[1461]: time="2025-03-19T11:33:59.058683236Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:59.058740 containerd[1461]: time="2025-03-19T11:33:59.058710197Z" level=info msg="StopPodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:59.059308 containerd[1461]: time="2025-03-19T11:33:59.059046537Z" level=info msg="RemovePodSandbox for \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:59.059308 containerd[1461]: time="2025-03-19T11:33:59.059073819Z" level=info msg="Forcibly stopping sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\"" Mar 19 11:33:59.059308 containerd[1461]: time="2025-03-19T11:33:59.059133143Z" level=info msg="TearDown network for sandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" successfully" Mar 19 11:33:59.061655 containerd[1461]: time="2025-03-19T11:33:59.061621490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.061737 containerd[1461]: time="2025-03-19T11:33:59.061675854Z" level=info msg="RemovePodSandbox \"1355f94a6106ad5a6cc075b6c685f6ad9c13d8ef5b55ea3e6d7ad317a24122e3\" returns successfully" Mar 19 11:33:59.062202 containerd[1461]: time="2025-03-19T11:33:59.062040835Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:59.062202 containerd[1461]: time="2025-03-19T11:33:59.062130841Z" level=info msg="TearDown network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" successfully" Mar 19 11:33:59.062202 containerd[1461]: time="2025-03-19T11:33:59.062141561Z" level=info msg="StopPodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" returns successfully" Mar 19 11:33:59.062668 containerd[1461]: time="2025-03-19T11:33:59.062482862Z" level=info msg="RemovePodSandbox for \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:59.062668 containerd[1461]: time="2025-03-19T11:33:59.062549466Z" level=info msg="Forcibly stopping sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\"" Mar 19 11:33:59.062668 containerd[1461]: time="2025-03-19T11:33:59.062625310Z" level=info msg="TearDown network for sandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" successfully" Mar 19 11:33:59.065358 containerd[1461]: time="2025-03-19T11:33:59.065284788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.065358 containerd[1461]: time="2025-03-19T11:33:59.065338071Z" level=info msg="RemovePodSandbox \"78bdc8bb98cd3232f4e6a8deb679a3e5bc2265c87009dcd20bf3b33aa7b2a4d0\" returns successfully" Mar 19 11:33:59.065812 containerd[1461]: time="2025-03-19T11:33:59.065773017Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" Mar 19 11:33:59.065877 containerd[1461]: time="2025-03-19T11:33:59.065860742Z" level=info msg="TearDown network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" successfully" Mar 19 11:33:59.065877 containerd[1461]: time="2025-03-19T11:33:59.065874183Z" level=info msg="StopPodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" returns successfully" Mar 19 11:33:59.066216 containerd[1461]: time="2025-03-19T11:33:59.066165601Z" level=info msg="RemovePodSandbox for \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" Mar 19 11:33:59.066265 containerd[1461]: time="2025-03-19T11:33:59.066197402Z" level=info msg="Forcibly stopping sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\"" Mar 19 11:33:59.066353 containerd[1461]: time="2025-03-19T11:33:59.066331050Z" level=info msg="TearDown network for sandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" successfully" Mar 19 11:33:59.068683 containerd[1461]: time="2025-03-19T11:33:59.068647388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.068763 containerd[1461]: time="2025-03-19T11:33:59.068709192Z" level=info msg="RemovePodSandbox \"8b1616784a188f9533d129c4b72787152fce1d2a8b24db86d12bd325d5be5009\" returns successfully" Mar 19 11:33:59.069245 containerd[1461]: time="2025-03-19T11:33:59.069065813Z" level=info msg="StopPodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\"" Mar 19 11:33:59.069245 containerd[1461]: time="2025-03-19T11:33:59.069166899Z" level=info msg="TearDown network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" successfully" Mar 19 11:33:59.069245 containerd[1461]: time="2025-03-19T11:33:59.069177060Z" level=info msg="StopPodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" returns successfully" Mar 19 11:33:59.069495 containerd[1461]: time="2025-03-19T11:33:59.069446956Z" level=info msg="RemovePodSandbox for \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\"" Mar 19 11:33:59.069495 containerd[1461]: time="2025-03-19T11:33:59.069468957Z" level=info msg="Forcibly stopping sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\"" Mar 19 11:33:59.069558 containerd[1461]: time="2025-03-19T11:33:59.069536481Z" level=info msg="TearDown network for sandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" successfully" Mar 19 11:33:59.071956 containerd[1461]: time="2025-03-19T11:33:59.071926903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.072130 containerd[1461]: time="2025-03-19T11:33:59.071976426Z" level=info msg="RemovePodSandbox \"255382c25c8c394ba5f75908c83913372214f03b899f37b701ac1ca962229080\" returns successfully" Mar 19 11:33:59.072316 containerd[1461]: time="2025-03-19T11:33:59.072289925Z" level=info msg="StopPodSandbox for \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\"" Mar 19 11:33:59.072519 containerd[1461]: time="2025-03-19T11:33:59.072441854Z" level=info msg="TearDown network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\" successfully" Mar 19 11:33:59.072519 containerd[1461]: time="2025-03-19T11:33:59.072457934Z" level=info msg="StopPodSandbox for \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\" returns successfully" Mar 19 11:33:59.072921 containerd[1461]: time="2025-03-19T11:33:59.072893440Z" level=info msg="RemovePodSandbox for \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\"" Mar 19 11:33:59.072973 containerd[1461]: time="2025-03-19T11:33:59.072927002Z" level=info msg="Forcibly stopping sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\"" Mar 19 11:33:59.073014 containerd[1461]: time="2025-03-19T11:33:59.073005007Z" level=info msg="TearDown network for sandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\" successfully" Mar 19 11:33:59.075399 containerd[1461]: time="2025-03-19T11:33:59.075362587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.075459 containerd[1461]: time="2025-03-19T11:33:59.075419110Z" level=info msg="RemovePodSandbox \"7cb44706cef6a24141bd46329d7fc588748e8f9534db4620561d9a6874ae9377\" returns successfully" Mar 19 11:33:59.075857 containerd[1461]: time="2025-03-19T11:33:59.075812654Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:59.075917 containerd[1461]: time="2025-03-19T11:33:59.075905619Z" level=info msg="TearDown network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:59.075939 containerd[1461]: time="2025-03-19T11:33:59.075917020Z" level=info msg="StopPodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:59.076676 containerd[1461]: time="2025-03-19T11:33:59.076220598Z" level=info msg="RemovePodSandbox for \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:59.076676 containerd[1461]: time="2025-03-19T11:33:59.076249400Z" level=info msg="Forcibly stopping sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\"" Mar 19 11:33:59.076676 containerd[1461]: time="2025-03-19T11:33:59.076317324Z" level=info msg="TearDown network for sandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" successfully" Mar 19 11:33:59.078644 containerd[1461]: time="2025-03-19T11:33:59.078613980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.078722 containerd[1461]: time="2025-03-19T11:33:59.078665663Z" level=info msg="RemovePodSandbox \"710651ec35d78ade1ad944b1b369cbd4861dff50011190867fe622ec48780127\" returns successfully" Mar 19 11:33:59.079335 containerd[1461]: time="2025-03-19T11:33:59.079083208Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:59.079335 containerd[1461]: time="2025-03-19T11:33:59.079172934Z" level=info msg="TearDown network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" successfully" Mar 19 11:33:59.079335 containerd[1461]: time="2025-03-19T11:33:59.079182334Z" level=info msg="StopPodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" returns successfully" Mar 19 11:33:59.079674 containerd[1461]: time="2025-03-19T11:33:59.079651602Z" level=info msg="RemovePodSandbox for \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:59.079730 containerd[1461]: time="2025-03-19T11:33:59.079679524Z" level=info msg="Forcibly stopping sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\"" Mar 19 11:33:59.079768 containerd[1461]: time="2025-03-19T11:33:59.079753288Z" level=info msg="TearDown network for sandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" successfully" Mar 19 11:33:59.081988 containerd[1461]: time="2025-03-19T11:33:59.081942538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.082037 containerd[1461]: time="2025-03-19T11:33:59.081996501Z" level=info msg="RemovePodSandbox \"e2951cf6a6c7e506fb53b4fef1fff6980acb8e4844ee5e8ddb2b6cca8a7c7a63\" returns successfully" Mar 19 11:33:59.082376 containerd[1461]: time="2025-03-19T11:33:59.082340922Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" Mar 19 11:33:59.082450 containerd[1461]: time="2025-03-19T11:33:59.082429727Z" level=info msg="TearDown network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" successfully" Mar 19 11:33:59.082450 containerd[1461]: time="2025-03-19T11:33:59.082444848Z" level=info msg="StopPodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" returns successfully" Mar 19 11:33:59.082815 containerd[1461]: time="2025-03-19T11:33:59.082783668Z" level=info msg="RemovePodSandbox for \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" Mar 19 11:33:59.082815 containerd[1461]: time="2025-03-19T11:33:59.082815990Z" level=info msg="Forcibly stopping sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\"" Mar 19 11:33:59.082897 containerd[1461]: time="2025-03-19T11:33:59.082877874Z" level=info msg="TearDown network for sandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" successfully" Mar 19 11:33:59.085178 containerd[1461]: time="2025-03-19T11:33:59.085148729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.085241 containerd[1461]: time="2025-03-19T11:33:59.085200972Z" level=info msg="RemovePodSandbox \"0b3d90755ca3213bf0351b0aa84cbb69b8b2cd086840471ff0a3e2628549d39a\" returns successfully" Mar 19 11:33:59.085529 containerd[1461]: time="2025-03-19T11:33:59.085508030Z" level=info msg="StopPodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\"" Mar 19 11:33:59.085606 containerd[1461]: time="2025-03-19T11:33:59.085591315Z" level=info msg="TearDown network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" successfully" Mar 19 11:33:59.085633 containerd[1461]: time="2025-03-19T11:33:59.085605236Z" level=info msg="StopPodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" returns successfully" Mar 19 11:33:59.085955 containerd[1461]: time="2025-03-19T11:33:59.085878292Z" level=info msg="RemovePodSandbox for \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\"" Mar 19 11:33:59.085997 containerd[1461]: time="2025-03-19T11:33:59.085961297Z" level=info msg="Forcibly stopping sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\"" Mar 19 11:33:59.086055 containerd[1461]: time="2025-03-19T11:33:59.086037742Z" level=info msg="TearDown network for sandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" successfully" Mar 19 11:33:59.088506 containerd[1461]: time="2025-03-19T11:33:59.088458125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 19 11:33:59.088553 containerd[1461]: time="2025-03-19T11:33:59.088519489Z" level=info msg="RemovePodSandbox \"6556d69dd3f60e5a01441886f4103d5e94efdc5adf8998f80409e259e2531c62\" returns successfully" Mar 19 11:33:59.088863 containerd[1461]: time="2025-03-19T11:33:59.088841948Z" level=info msg="StopPodSandbox for \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\"" Mar 19 11:33:59.088940 containerd[1461]: time="2025-03-19T11:33:59.088922273Z" level=info msg="TearDown network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\" successfully" Mar 19 11:33:59.088994 containerd[1461]: time="2025-03-19T11:33:59.088937154Z" level=info msg="StopPodSandbox for \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\" returns successfully" Mar 19 11:33:59.089233 containerd[1461]: time="2025-03-19T11:33:59.089165247Z" level=info msg="RemovePodSandbox for \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\"" Mar 19 11:33:59.089233 containerd[1461]: time="2025-03-19T11:33:59.089193889Z" level=info msg="Forcibly stopping sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\"" Mar 19 11:33:59.089298 containerd[1461]: time="2025-03-19T11:33:59.089246852Z" level=info msg="TearDown network for sandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\" successfully" Mar 19 11:33:59.091438 containerd[1461]: time="2025-03-19T11:33:59.091400620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 19 11:33:59.091488 containerd[1461]: time="2025-03-19T11:33:59.091455944Z" level=info msg="RemovePodSandbox \"538e042fab469eee478e5d46e50ff525196384976a4279b55d656548f7967d90\" returns successfully" Mar 19 11:34:02.171400 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:56626.service - OpenSSH per-connection server daemon (10.0.0.1:56626). Mar 19 11:34:02.215147 sshd[6182]: Accepted publickey for core from 10.0.0.1 port 56626 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:34:02.216353 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:34:02.220424 systemd-logind[1447]: New session 20 of user core. Mar 19 11:34:02.234873 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 19 11:34:02.368076 sshd[6184]: Connection closed by 10.0.0.1 port 56626 Mar 19 11:34:02.368556 sshd-session[6182]: pam_unix(sshd:session): session closed for user core Mar 19 11:34:02.371948 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:56626.service: Deactivated successfully. Mar 19 11:34:02.374051 systemd[1]: session-20.scope: Deactivated successfully. Mar 19 11:34:02.375504 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Mar 19 11:34:02.376398 systemd-logind[1447]: Removed session 20. Mar 19 11:34:07.380181 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:57836.service - OpenSSH per-connection server daemon (10.0.0.1:57836). Mar 19 11:34:07.425377 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 57836 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:34:07.426531 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:34:07.430670 systemd-logind[1447]: New session 21 of user core. 
Mar 19 11:34:07.438990 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 19 11:34:07.581853 sshd[6201]: Connection closed by 10.0.0.1 port 57836 Mar 19 11:34:07.582223 sshd-session[6199]: pam_unix(sshd:session): session closed for user core Mar 19 11:34:07.585124 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:57836.service: Deactivated successfully. Mar 19 11:34:07.587115 systemd[1]: session-21.scope: Deactivated successfully. Mar 19 11:34:07.588575 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Mar 19 11:34:07.589600 systemd-logind[1447]: Removed session 21. Mar 19 11:34:08.094462 kubelet[2648]: E0319 11:34:08.094415 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:34:12.597193 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:46592.service - OpenSSH per-connection server daemon (10.0.0.1:46592). Mar 19 11:34:12.642967 sshd[6243]: Accepted publickey for core from 10.0.0.1 port 46592 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:34:12.644320 sshd-session[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:34:12.648362 systemd-logind[1447]: New session 22 of user core. Mar 19 11:34:12.658837 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 19 11:34:12.793560 sshd[6245]: Connection closed by 10.0.0.1 port 46592 Mar 19 11:34:12.794125 sshd-session[6243]: pam_unix(sshd:session): session closed for user core Mar 19 11:34:12.797340 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:46592.service: Deactivated successfully. Mar 19 11:34:12.799018 systemd[1]: session-22.scope: Deactivated successfully. Mar 19 11:34:12.800182 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Mar 19 11:34:12.801213 systemd-logind[1447]: Removed session 22. Mar 19 11:34:17.809143 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Mar 19 11:34:17.849663 sshd[6260]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:34:17.851196 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:34:17.857437 systemd-logind[1447]: New session 23 of user core. Mar 19 11:34:17.864035 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 19 11:34:17.989738 sshd[6262]: Connection closed by 10.0.0.1 port 46602 Mar 19 11:34:17.990571 sshd-session[6260]: pam_unix(sshd:session): session closed for user core Mar 19 11:34:17.993874 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:46602.service: Deactivated successfully. Mar 19 11:34:17.997516 systemd[1]: session-23.scope: Deactivated successfully. Mar 19 11:34:17.998354 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Mar 19 11:34:17.999149 systemd-logind[1447]: Removed session 23.