Jan 29 11:00:15.921505 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:00:15.921527 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 11:00:15.921537 kernel: KASLR enabled
Jan 29 11:00:15.921543 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:00:15.921549 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 29 11:00:15.921555 kernel: random: crng init done
Jan 29 11:00:15.921562 kernel: secureboot: Secure boot disabled
Jan 29 11:00:15.921567 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:00:15.921573 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:00:15.921580 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:00:15.921587 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921592 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921598 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921604 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921612 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921619 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921626 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921632 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921638 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:00:15.921644 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:00:15.921651 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:00:15.921657 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:00:15.921663 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:00:15.921669 kernel: Zone ranges:
Jan 29 11:00:15.921675 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:00:15.921683 kernel: DMA32 empty
Jan 29 11:00:15.921689 kernel: Normal empty
Jan 29 11:00:15.921695 kernel: Movable zone start for each node
Jan 29 11:00:15.921701 kernel: Early memory node ranges
Jan 29 11:00:15.921707 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 29 11:00:15.921713 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 29 11:00:15.921720 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 29 11:00:15.921726 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:00:15.921732 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:00:15.921738 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:00:15.921744 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:00:15.921750 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:00:15.921757 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:00:15.921763 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:00:15.921770 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:00:15.921779 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:00:15.921785 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:00:15.921792 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:00:15.921800 kernel: psci: Trusted OS migration not required
Jan 29 11:00:15.921806 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:00:15.921813 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:00:15.921820 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:00:15.921826 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:00:15.921833 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:00:15.921839 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:00:15.921846 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:00:15.921852 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:00:15.921859 kernel: CPU features: detected: Spectre-v4
Jan 29 11:00:15.921866 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:00:15.921873 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:00:15.921880 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:00:15.921886 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:00:15.921892 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:00:15.921899 kernel: alternatives: applying boot alternatives
Jan 29 11:00:15.921906 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 11:00:15.921913 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:00:15.921920 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:00:15.921926 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:00:15.921933 kernel: Fallback order for Node 0: 0
Jan 29 11:00:15.921941 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:00:15.921947 kernel: Policy zone: DMA
Jan 29 11:00:15.921954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:00:15.921960 kernel: software IO TLB: area num 4.
Jan 29 11:00:15.921967 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:00:15.921975 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 29 11:00:15.921986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:00:15.921994 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:00:15.922001 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:00:15.922008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:00:15.922015 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:00:15.922022 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:00:15.922030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:00:15.922037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:00:15.922043 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:00:15.922050 kernel: GICv3: 256 SPIs implemented
Jan 29 11:00:15.922056 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:00:15.922063 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:00:15.922069 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:00:15.922086 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:00:15.922093 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:00:15.922100 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:00:15.922106 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:00:15.922115 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:00:15.922122 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:00:15.922129 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:00:15.922135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:00:15.922142 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:00:15.922149 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:00:15.922155 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:00:15.922162 kernel: arm-pv: using stolen time PV
Jan 29 11:00:15.922169 kernel: Console: colour dummy device 80x25
Jan 29 11:00:15.922176 kernel: ACPI: Core revision 20230628
Jan 29 11:00:15.922183 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:00:15.922191 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:00:15.922198 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:00:15.922205 kernel: landlock: Up and running.
Jan 29 11:00:15.922211 kernel: SELinux: Initializing.
Jan 29 11:00:15.922218 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:00:15.922225 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:00:15.922232 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:00:15.922239 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:00:15.922246 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:00:15.922254 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:00:15.922261 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:00:15.922268 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:00:15.922274 kernel: Remapping and enabling EFI services.
Jan 29 11:00:15.922281 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:00:15.922288 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:00:15.922295 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:00:15.922302 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:00:15.922309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:00:15.922317 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:00:15.922324 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:00:15.922336 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:00:15.922350 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:00:15.922357 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:00:15.922364 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:00:15.922371 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:00:15.922378 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:00:15.922386 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:00:15.922395 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:00:15.922402 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:00:15.922409 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:00:15.922416 kernel: SMP: Total of 4 processors activated.
Jan 29 11:00:15.922423 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:00:15.922430 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:00:15.922438 kernel: CPU features: detected: Common not Private translations
Jan 29 11:00:15.922445 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:00:15.922453 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:00:15.922461 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:00:15.922468 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:00:15.922475 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:00:15.922482 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:00:15.922489 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:00:15.922496 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:00:15.922503 kernel: alternatives: applying system-wide alternatives
Jan 29 11:00:15.922510 kernel: devtmpfs: initialized
Jan 29 11:00:15.922519 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:00:15.922526 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:00:15.922533 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:00:15.922540 kernel: SMBIOS 3.0.0 present.
Jan 29 11:00:15.922547 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:00:15.922554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:00:15.922561 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:00:15.922568 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:00:15.922576 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:00:15.922584 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:00:15.922591 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 29 11:00:15.922598 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:00:15.922606 kernel: cpuidle: using governor menu
Jan 29 11:00:15.922613 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:00:15.922620 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:00:15.922627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:00:15.922634 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:00:15.922641 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:00:15.922650 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:00:15.922657 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 11:00:15.922664 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:00:15.922671 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:00:15.922678 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:00:15.922685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:00:15.922692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:00:15.922699 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:00:15.922707 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:00:15.922715 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:00:15.922722 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:00:15.922729 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:00:15.922736 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:00:15.922743 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:00:15.922750 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:00:15.922757 kernel: ACPI: Interpreter enabled
Jan 29 11:00:15.922764 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:00:15.922771 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:00:15.922778 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:00:15.922787 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:00:15.922794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:00:15.922929 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:00:15.923001 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:00:15.923065 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:00:15.923148 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:00:15.923247 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:00:15.923259 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:00:15.923267 kernel: PCI host bridge to bus 0000:00
Jan 29 11:00:15.923338 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:00:15.923417 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:00:15.923477 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:00:15.923535 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:00:15.923614 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:00:15.923700 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:00:15.923768 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:00:15.923835 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:00:15.923899 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:00:15.923965 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:00:15.924029 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:00:15.924112 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:00:15.924174 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:00:15.924232 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:00:15.924292 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:00:15.924302 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:00:15.924309 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:00:15.924316 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:00:15.924323 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:00:15.924333 kernel: iommu: Default domain type: Translated
Jan 29 11:00:15.924340 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:00:15.924354 kernel: efivars: Registered efivars operations
Jan 29 11:00:15.924361 kernel: vgaarb: loaded
Jan 29 11:00:15.924369 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:00:15.924376 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:00:15.924383 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:00:15.924390 kernel: pnp: PnP ACPI init
Jan 29 11:00:15.924473 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:00:15.924486 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:00:15.924493 kernel: NET: Registered PF_INET protocol family
Jan 29 11:00:15.924500 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:00:15.924507 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:00:15.924514 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:00:15.924522 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:00:15.924529 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:00:15.924536 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:00:15.924550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:00:15.924558 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:00:15.924565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:00:15.924572 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:00:15.924579 kernel: kvm [1]: HYP mode not available
Jan 29 11:00:15.924586 kernel: Initialise system trusted keyrings
Jan 29 11:00:15.924593 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:00:15.924600 kernel: Key type asymmetric registered
Jan 29 11:00:15.924607 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:00:15.924616 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:00:15.924623 kernel: io scheduler mq-deadline registered
Jan 29 11:00:15.924630 kernel: io scheduler kyber registered
Jan 29 11:00:15.924637 kernel: io scheduler bfq registered
Jan 29 11:00:15.924644 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:00:15.924651 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:00:15.924659 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:00:15.924735 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:00:15.924745 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:00:15.924754 kernel: thunder_xcv, ver 1.0
Jan 29 11:00:15.924761 kernel: thunder_bgx, ver 1.0
Jan 29 11:00:15.924768 kernel: nicpf, ver 1.0
Jan 29 11:00:15.924775 kernel: nicvf, ver 1.0
Jan 29 11:00:15.924849 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:00:15.924915 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:00:15 UTC (1738148415)
Jan 29 11:00:15.924925 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:00:15.924932 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:00:15.924941 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:00:15.924949 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:00:15.924956 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:00:15.924963 kernel: Segment Routing with IPv6
Jan 29 11:00:15.924970 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:00:15.924977 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:00:15.924984 kernel: Key type dns_resolver registered
Jan 29 11:00:15.924991 kernel: registered taskstats version 1
Jan 29 11:00:15.924998 kernel: Loading compiled-in X.509 certificates
Jan 29 11:00:15.925005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 29 11:00:15.925014 kernel: Key type .fscrypt registered
Jan 29 11:00:15.925021 kernel: Key type fscrypt-provisioning registered
Jan 29 11:00:15.925028 kernel: ima: No TPM chip found, activating TPM-bypass!
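The rtc-efi entry above reports the RTC time twice, as an ISO timestamp and as a Unix epoch value (`2025-01-29T11:00:15 UTC (1738148415)`). The two encodings are redundant and can be cross-checked; a minimal sketch (Python chosen here purely for illustration):

```python
from datetime import datetime, timezone

# Values taken from the rtc-efi line in the log above.
rtc_iso = "2025-01-29T11:00:15"
rtc_epoch = 1738148415

# Interpret the ISO form as UTC and convert to seconds since the Unix epoch.
parsed = datetime.fromisoformat(rtc_iso).replace(tzinfo=timezone.utc)
assert int(parsed.timestamp()) == rtc_epoch  # the two encodings agree
```

A mismatch here would indicate a clock or timezone problem at boot; in this log the values are consistent.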
Jan 29 11:00:15.925035 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:00:15.925043 kernel: ima: No architecture policies found
Jan 29 11:00:15.925050 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:00:15.925057 kernel: clk: Disabling unused clocks
Jan 29 11:00:15.925064 kernel: Freeing unused kernel memory: 39936K
Jan 29 11:00:15.925072 kernel: Run /init as init process
Jan 29 11:00:15.925090 kernel: with arguments:
Jan 29 11:00:15.925097 kernel: /init
Jan 29 11:00:15.925104 kernel: with environment:
Jan 29 11:00:15.925111 kernel: HOME=/
Jan 29 11:00:15.925118 kernel: TERM=linux
Jan 29 11:00:15.925124 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:00:15.925133 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:00:15.925145 systemd[1]: Detected virtualization kvm.
Jan 29 11:00:15.925153 systemd[1]: Detected architecture arm64.
Jan 29 11:00:15.925160 systemd[1]: Running in initrd.
Jan 29 11:00:15.925168 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:00:15.925175 systemd[1]: Hostname set to .
Jan 29 11:00:15.925183 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:00:15.925191 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:00:15.925198 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:00:15.925208 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:00:15.925217 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:00:15.925224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:00:15.925232 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:00:15.925240 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:00:15.925249 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:00:15.925257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:00:15.925267 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:00:15.925274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:00:15.925282 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:00:15.925290 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:00:15.925297 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:00:15.925305 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:00:15.925313 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:00:15.925321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:00:15.925328 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:00:15.925338 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:00:15.925352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:00:15.925360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:00:15.925368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:00:15.925376 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:00:15.925384 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:00:15.925391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:00:15.925399 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:00:15.925408 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:00:15.925416 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:00:15.925424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:00:15.925432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:00:15.925443 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:00:15.925453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:00:15.925463 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:00:15.925494 systemd-journald[240]: Collecting audit messages is disabled.
Jan 29 11:00:15.925513 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:00:15.925523 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:00:15.925531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:00:15.925540 systemd-journald[240]: Journal started
Jan 29 11:00:15.925563 systemd-journald[240]: Runtime Journal (/run/log/journal/e0e18c734f09435f91c4c86f18f68a2e) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:00:15.905409 systemd-modules-load[241]: Inserted module 'overlay'
Jan 29 11:00:15.928710 kernel: Bridge firewalling registered
Jan 29 11:00:15.928730 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:00:15.927857 systemd-modules-load[241]: Inserted module 'br_netfilter'
Jan 29 11:00:15.929894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:00:15.932471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:00:15.936134 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:00:15.937926 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:00:15.940255 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:00:15.943651 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:00:15.952753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:00:15.954328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:00:15.956635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:00:15.960063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:00:15.970232 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:00:15.972602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:00:15.981045 dracut-cmdline[276]: dracut-dracut-053
Jan 29 11:00:15.983915 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 11:00:16.000309 systemd-resolved[278]: Positive Trust Anchors:
Jan 29 11:00:16.000323 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:00:16.000361 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:00:16.005319 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 29 11:00:16.008927 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:00:16.010384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:00:16.056106 kernel: SCSI subsystem initialized
Jan 29 11:00:16.061095 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:00:16.068099 kernel: iscsi: registered transport (tcp)
Jan 29 11:00:16.083119 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:00:16.083151 kernel: QLogic iSCSI HBA Driver
Jan 29 11:00:16.124121 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:00:16.141216 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:00:16.157100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:00:16.157150 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:00:16.158762 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:00:16.220109 kernel: raid6: neonx8 gen() 15782 MB/s
Jan 29 11:00:16.237104 kernel: raid6: neonx4 gen() 15622 MB/s
Jan 29 11:00:16.254110 kernel: raid6: neonx2 gen() 13104 MB/s
Jan 29 11:00:16.271110 kernel: raid6: neonx1 gen() 10382 MB/s
Jan 29 11:00:16.288110 kernel: raid6: int64x8 gen() 6750 MB/s
Jan 29 11:00:16.305107 kernel: raid6: int64x4 gen() 7255 MB/s
Jan 29 11:00:16.322168 kernel: raid6: int64x2 gen() 6086 MB/s
Jan 29 11:00:16.339232 kernel: raid6: int64x1 gen() 4929 MB/s
Jan 29 11:00:16.339253 kernel: raid6: using algorithm neonx8 gen() 15782 MB/s
Jan 29 11:00:16.357176 kernel: raid6: .... xor() 11942 MB/s, rmw enabled
Jan 29 11:00:16.357190 kernel: raid6: using neon recovery algorithm
Jan 29 11:00:16.362528 kernel: xor: measuring software checksum speed
Jan 29 11:00:16.362545 kernel: 8regs : 21618 MB/sec
Jan 29 11:00:16.363192 kernel: 32regs : 21704 MB/sec
Jan 29 11:00:16.364437 kernel: arm64_neon : 27672 MB/sec
Jan 29 11:00:16.364460 kernel: xor: using function: arm64_neon (27672 MB/sec)
Jan 29 11:00:16.415100 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:00:16.425647 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:00:16.437283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:00:16.450009 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 11:00:16.453099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:00:16.456642 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:00:16.470324 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 29 11:00:16.496021 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
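The raid6 lines above show the kernel benchmarking each available gen() implementation and then keeping the fastest one ("using algorithm neonx8 gen() 15782 MB/s"). The selection reduces to taking the maximum over the measured throughputs; a minimal sketch (Python for illustration, using the figures reported in this log):

```python
# Throughputs in MB/s as reported by the raid6 benchmark above.
gen_speeds = {
    "neonx8": 15782, "neonx4": 15622, "neonx2": 13104, "neonx1": 10382,
    "int64x8": 6750, "int64x4": 7255, "int64x2": 6086, "int64x1": 4929,
}

# The kernel keeps whichever implementation measured fastest.
best = max(gen_speeds, key=gen_speeds.get)
assert best == "neonx8" and gen_speeds[best] == 15782
```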
Jan 29 11:00:16.516352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:00:16.555676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:00:16.564312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:00:16.576947 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:00:16.578417 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:00:16.580353 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:00:16.581385 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:00:16.590346 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:00:16.600167 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:00:16.604823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:00:16.604952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:00:16.610451 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:00:16.614337 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:00:16.614439 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:00:16.614450 kernel: GPT:9289727 != 19775487
Jan 29 11:00:16.614465 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:00:16.614474 kernel: GPT:9289727 != 19775487
Jan 29 11:00:16.614484 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:00:16.614493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:00:16.608251 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:00:16.613624 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:00:16.613768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:00:16.617958 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:00:16.630114 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (507)
Jan 29 11:00:16.632092 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (513)
Jan 29 11:00:16.633854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:00:16.645753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:00:16.653440 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:00:16.657737 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:00:16.662094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:00:16.665877 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:00:16.667025 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:00:16.680233 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:00:16.681914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:00:16.687868 disk-uuid[550]: Primary Header is updated.
Jan 29 11:00:16.687868 disk-uuid[550]: Secondary Entries is updated.
Jan 29 11:00:16.687868 disk-uuid[550]: Secondary Header is updated.
Jan 29 11:00:16.691099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:00:16.705949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:00:17.704098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:00:17.705093 disk-uuid[551]: The operation has completed successfully.
Jan 29 11:00:17.730502 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:00:17.730602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:00:17.752239 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:00:17.755225 sh[571]: Success
Jan 29 11:00:17.771387 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:00:17.817541 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:00:17.819374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:00:17.820509 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:00:17.832176 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 29 11:00:17.832214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:00:17.832227 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:00:17.834695 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:00:17.834719 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:00:17.837852 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:00:17.839239 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:00:17.843223 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:00:17.844784 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:00:17.853658 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:00:17.853699 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:00:17.853710 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:00:17.857199 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:00:17.864008 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:00:17.866277 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:00:17.870990 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:00:17.878245 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:00:17.942650 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:00:17.953237 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:00:17.971136 ignition[665]: Ignition 2.20.0
Jan 29 11:00:17.971147 ignition[665]: Stage: fetch-offline
Jan 29 11:00:17.971182 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:00:17.971190 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:00:17.971345 ignition[665]: parsed url from cmdline: ""
Jan 29 11:00:17.974162 systemd-networkd[767]: lo: Link UP
Jan 29 11:00:17.971349 ignition[665]: no config URL provided
Jan 29 11:00:17.974166 systemd-networkd[767]: lo: Gained carrier
Jan 29 11:00:17.971354 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:00:17.974973 systemd-networkd[767]: Enumeration completed
Jan 29 11:00:17.971368 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:00:17.975171 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:00:17.971392 ignition[665]: op(1): [started] loading QEMU firmware config module
Jan 29 11:00:17.975405 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:00:17.971396 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:00:17.975408 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:00:17.977973 ignition[665]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:00:17.976242 systemd-networkd[767]: eth0: Link UP
Jan 29 11:00:17.977995 ignition[665]: QEMU firmware config was not found. Ignoring...
Jan 29 11:00:17.976245 systemd-networkd[767]: eth0: Gained carrier
Jan 29 11:00:17.976252 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:00:17.976702 systemd[1]: Reached target network.target - Network.
Jan 29 11:00:17.993127 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:00:18.027875 ignition[665]: parsing config with SHA512: e84a94feb3c1d41ef6c8e8f11ba20ed6dcc4408c2cbd012bcb0be783967fe7b408f91e461810c9db6b71768aa5350790f80d4a6b322dd58a6ffc47afe1762592
Jan 29 11:00:18.033833 unknown[665]: fetched base config from "system"
Jan 29 11:00:18.033843 unknown[665]: fetched user config from "qemu"
Jan 29 11:00:18.034389 ignition[665]: fetch-offline: fetch-offline passed
Jan 29 11:00:18.036091 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:00:18.034481 ignition[665]: Ignition finished successfully
Jan 29 11:00:18.037700 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:00:18.045244 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:00:18.055648 ignition[773]: Ignition 2.20.0
Jan 29 11:00:18.055662 ignition[773]: Stage: kargs
Jan 29 11:00:18.055845 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:00:18.055855 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:00:18.058445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:00:18.056741 ignition[773]: kargs: kargs passed
Jan 29 11:00:18.056789 ignition[773]: Ignition finished successfully
Jan 29 11:00:18.068251 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:00:18.077854 ignition[781]: Ignition 2.20.0
Jan 29 11:00:18.077866 ignition[781]: Stage: disks
Jan 29 11:00:18.078037 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:00:18.078047 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:00:18.080598 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:00:18.078929 ignition[781]: disks: disks passed
Jan 29 11:00:18.082314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:00:18.078976 ignition[781]: Ignition finished successfully
Jan 29 11:00:18.084028 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:00:18.085208 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:00:18.086653 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:00:18.088575 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:00:18.098299 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:00:18.108627 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:00:18.113496 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:00:18.119184 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:00:18.162098 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 11:00:18.162593 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:00:18.163908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:00:18.171171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:00:18.172816 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:00:18.174058 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:00:18.174110 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:00:18.174132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:00:18.184507 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 29 11:00:18.184531 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:00:18.184549 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:00:18.180789 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:00:18.187969 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:00:18.183019 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:00:18.191092 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:00:18.192396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:00:18.237825 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:00:18.242527 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:00:18.248121 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:00:18.252519 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:00:18.331152 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:00:18.339166 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:00:18.341617 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:00:18.347088 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:00:18.362011 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:00:18.366384 ignition[913]: INFO : Ignition 2.20.0
Jan 29 11:00:18.367370 ignition[913]: INFO : Stage: mount
Jan 29 11:00:18.367370 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:00:18.367370 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:00:18.370245 ignition[913]: INFO : mount: mount passed
Jan 29 11:00:18.370245 ignition[913]: INFO : Ignition finished successfully
Jan 29 11:00:18.369721 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:00:18.380202 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:00:18.830595 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:00:18.839261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:00:18.846097 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Jan 29 11:00:18.846129 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:00:18.846140 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:00:18.847633 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:00:18.850095 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:00:18.850814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:00:18.867441 ignition[942]: INFO : Ignition 2.20.0
Jan 29 11:00:18.867441 ignition[942]: INFO : Stage: files
Jan 29 11:00:18.869088 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:00:18.869088 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:00:18.869088 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:00:18.872348 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:00:18.872348 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:00:18.875532 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:00:18.876840 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:00:18.876840 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:00:18.876057 unknown[942]: wrote ssh authorized keys file for user: core
Jan 29 11:00:18.880648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:00:18.880648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 11:00:18.925958 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:00:19.068936 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:00:19.071140 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 11:00:19.403119 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:00:19.634558 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 29 11:00:19.679206 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:00:19.679206 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 11:00:19.683088 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:00:19.705774 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:00:19.709426 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:00:19.711000 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:00:19.711000 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:00:19.711000 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:00:19.711000 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:00:19.711000 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:00:19.711000 ignition[942]: INFO : files: files passed
Jan 29 11:00:19.711000 ignition[942]: INFO : Ignition finished successfully
Jan 29 11:00:19.713790 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:00:19.727242 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:00:19.729607 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:00:19.732698 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:00:19.732804 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:00:19.737235 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:00:19.740636 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:00:19.740636 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:00:19.743875 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:00:19.744482 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:00:19.747038 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:00:19.753217 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:00:19.773500 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:00:19.773637 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:00:19.775945 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:00:19.777849 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:00:19.779742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:00:19.780571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:00:19.796072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:00:19.811249 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:00:19.819025 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:00:19.820321 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:00:19.822440 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:00:19.824224 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:00:19.824354 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:00:19.826948 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:00:19.828160 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:00:19.830275 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:00:19.832716 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:00:19.834578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:00:19.836498 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:00:19.838441 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:00:19.840454 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:00:19.842278 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:00:19.844410 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:00:19.845953 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:00:19.846105 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:00:19.848506 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:00:19.850348 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:00:19.852288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:00:19.853134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:00:19.854423 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:00:19.854555 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:00:19.857169 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:00:19.857292 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:00:19.859588 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:00:19.861139 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:00:19.862156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:00:19.864208 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:00:19.866050 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:00:19.867675 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:00:19.867770 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:00:19.869511 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:00:19.869593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:00:19.871814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:00:19.871926 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:00:19.873683 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:00:19.873788 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:00:19.883292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:00:19.884934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:00:19.885882 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:00:19.886010 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:00:19.888087 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:00:19.888198 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:00:19.894422 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:00:19.894515 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:00:19.898163 ignition[997]: INFO : Ignition 2.20.0
Jan 29 11:00:19.898163 ignition[997]: INFO : Stage: umount
Jan 29 11:00:19.898163 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:00:19.898163 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:00:19.902463 ignition[997]: INFO : umount: umount passed
Jan 29 11:00:19.902463 ignition[997]: INFO : Ignition finished successfully
Jan 29 11:00:19.898981 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:00:19.903346 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:00:19.903461 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:00:19.905332 systemd[1]: Stopped target network.target - Network.
Jan 29 11:00:19.906827 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:00:19.906891 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:00:19.908756 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:00:19.908801 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:00:19.910655 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:00:19.910696 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:00:19.912395 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:00:19.912444 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:00:19.914520 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:00:19.916476 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:00:19.919393 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:00:19.919477 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:00:19.921303 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:00:19.921398 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:00:19.927302 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:00:19.927416 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:00:19.928911 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jan 29 11:00:19.930839 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:00:19.930954 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:00:19.933433 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:00:19.933482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:00:19.940234 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:00:19.941417 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:00:19.941483 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:00:19.943563 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:00:19.943609 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:00:19.945454 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:00:19.945499 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:00:19.947409 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:00:19.947457 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:00:19.949529 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:00:19.961375 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:00:19.961486 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:00:19.963452 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:00:19.963568 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:00:19.965872 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:00:19.965927 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:00:19.967135 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:00:19.967169 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:00:19.969176 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:00:19.969225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:00:19.971956 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:00:19.971999 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:00:19.973939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:00:19.973984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:00:19.989221 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:00:19.990251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:00:19.990310 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:00:19.992406 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:00:19.992452 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:00:19.994428 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:00:19.994471 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:00:19.996747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:00:19.996792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:00:19.999033 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:00:20.000160 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:00:20.002451 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:00:20.004847 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:00:20.013594 systemd[1]: Switching root.
Jan 29 11:00:20.043325 systemd-journald[240]: Journal stopped
Jan 29 11:00:20.755946 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:00:20.756039 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:00:20.756051 kernel: SELinux: policy capability open_perms=1
Jan 29 11:00:20.756065 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:00:20.756086 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:00:20.756098 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:00:20.756108 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:00:20.756117 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:00:20.756131 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:00:20.756141 kernel: audit: type=1403 audit(1738148420.186:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:00:20.756151 systemd[1]: Successfully loaded SELinux policy in 32.317ms.
Jan 29 11:00:20.756172 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.020ms.
Jan 29 11:00:20.756184 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:00:20.756195 systemd[1]: Detected virtualization kvm.
Jan 29 11:00:20.756206 systemd[1]: Detected architecture arm64.
Jan 29 11:00:20.756216 systemd[1]: Detected first boot.
Jan 29 11:00:20.756227 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:00:20.756237 zram_generator::config[1042]: No configuration found.
Jan 29 11:00:20.756249 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:00:20.756261 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:00:20.756273 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:00:20.756284 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:00:20.756294 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:00:20.756305 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:00:20.756315 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:00:20.756325 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:00:20.756336 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:00:20.756350 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:00:20.756363 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:00:20.756380 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:00:20.756392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:00:20.756403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:00:20.756413 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:00:20.756424 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:00:20.756434 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:00:20.756445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:00:20.756455 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 11:00:20.756468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:00:20.756478 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:00:20.756489 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:00:20.756500 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:00:20.756511 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:00:20.756521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:00:20.756531 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:00:20.756541 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:00:20.756553 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:00:20.756564 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:00:20.756574 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:00:20.756586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:00:20.756597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:00:20.756607 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:00:20.756618 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:00:20.756628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:00:20.756638 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:00:20.756650 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:00:20.756660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:00:20.756670 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:00:20.756681 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:00:20.756692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:00:20.756703 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:00:20.756713 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:00:20.756725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:00:20.756736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:00:20.756747 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:00:20.756757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:00:20.756767 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:00:20.756778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:00:20.756788 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:00:20.756798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:00:20.756809 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:00:20.756819 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:00:20.756831 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:00:20.756842 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:00:20.756852 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:00:20.756862 kernel: loop: module loaded
Jan 29 11:00:20.756871 kernel: fuse: init (API version 7.39)
Jan 29 11:00:20.756881 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:00:20.756891 kernel: ACPI: bus type drm_connector registered
Jan 29 11:00:20.756901 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:00:20.756912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:00:20.756924 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:00:20.756934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:00:20.756964 systemd-journald[1109]: Collecting audit messages is disabled.
Jan 29 11:00:20.756985 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:00:20.756996 systemd[1]: Stopped verity-setup.service.
Jan 29 11:00:20.757006 systemd-journald[1109]: Journal started
Jan 29 11:00:20.757034 systemd-journald[1109]: Runtime Journal (/run/log/journal/e0e18c734f09435f91c4c86f18f68a2e) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:00:20.555175 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:00:20.577033 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:00:20.577388 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:00:20.761099 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:00:20.763032 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:00:20.764205 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:00:20.765426 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:00:20.766513 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:00:20.767759 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:00:20.769011 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:00:20.772113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:00:20.773523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:00:20.775021 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:00:20.775191 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:00:20.776627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:00:20.776760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:00:20.778163 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:00:20.780114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:00:20.781428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:00:20.781559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:00:20.783167 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:00:20.783330 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:00:20.784717 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:00:20.784860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:00:20.786233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:00:20.787792 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:00:20.789583 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:00:20.802726 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:00:20.808170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:00:20.810176 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:00:20.811273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:00:20.811304 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:00:20.813267 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:00:20.815393 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:00:20.817457 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:00:20.818577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:00:20.819795 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:00:20.821887 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:00:20.823169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:00:20.827259 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:00:20.829288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:00:20.830239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:00:20.831836 systemd-journald[1109]: Time spent on flushing to /var/log/journal/e0e18c734f09435f91c4c86f18f68a2e is 24.668ms for 858 entries.
Jan 29 11:00:20.831836 systemd-journald[1109]: System Journal (/var/log/journal/e0e18c734f09435f91c4c86f18f68a2e) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:00:20.862980 systemd-journald[1109]: Received client request to flush runtime journal.
Jan 29 11:00:20.837251 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:00:20.839564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:00:20.844735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:00:20.846468 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:00:20.847760 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:00:20.849177 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:00:20.861842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:00:20.864655 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:00:20.868317 kernel: loop0: detected capacity change from 0 to 201592
Jan 29 11:00:20.867628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:00:20.869387 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jan 29 11:00:20.869402 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jan 29 11:00:20.877267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:00:20.879176 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:00:20.880839 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:00:20.892273 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:00:20.894868 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:00:20.897367 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:00:20.904097 kernel: loop1: detected capacity change from 0 to 113552
Jan 29 11:00:20.909698 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:00:20.916918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:00:20.919613 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:00:20.927383 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:00:20.934279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:00:20.948666 kernel: loop2: detected capacity change from 0 to 116784
Jan 29 11:00:20.948489 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 29 11:00:20.948499 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 29 11:00:20.952716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:00:20.994112 kernel: loop3: detected capacity change from 0 to 201592
Jan 29 11:00:21.000100 kernel: loop4: detected capacity change from 0 to 113552
Jan 29 11:00:21.009094 kernel: loop5: detected capacity change from 0 to 116784
Jan 29 11:00:21.013227 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:00:21.013697 (sd-merge)[1181]: Merged extensions into '/usr'.
Jan 29 11:00:21.018921 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:00:21.018940 systemd[1]: Reloading...
Jan 29 11:00:21.078571 zram_generator::config[1206]: No configuration found.
Jan 29 11:00:21.132704 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:00:21.178405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:00:21.214289 systemd[1]: Reloading finished in 194 ms.
Jan 29 11:00:21.247844 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:00:21.251113 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:00:21.261255 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:00:21.263171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:00:21.275845 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:00:21.275868 systemd[1]: Reloading...
Jan 29 11:00:21.286329 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:00:21.286561 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:00:21.287220 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:00:21.287435 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jan 29 11:00:21.287482 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jan 29 11:00:21.290303 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:00:21.290315 systemd-tmpfiles[1242]: Skipping /boot
Jan 29 11:00:21.300104 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:00:21.300116 systemd-tmpfiles[1242]: Skipping /boot
Jan 29 11:00:21.319114 zram_generator::config[1272]: No configuration found.
Jan 29 11:00:21.399274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:00:21.435480 systemd[1]: Reloading finished in 159 ms.
Jan 29 11:00:21.451333 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:00:21.463820 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:00:21.471489 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:00:21.473878 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:00:21.476717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:00:21.480413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:00:21.483403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:00:21.489316 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:00:21.495871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:00:21.501546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:00:21.505325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:00:21.510283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:00:21.514958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:00:21.516522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:00:21.519261 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:00:21.522120 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:00:21.523877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:00:21.523998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:00:21.525748 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:00:21.525883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:00:21.527519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:00:21.527636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:00:21.527810 systemd-udevd[1310]: Using default interface naming scheme 'v255'.
Jan 29 11:00:21.529697 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:00:21.529816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:00:21.536117 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:00:21.542796 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:00:21.546861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:00:21.556283 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:00:21.557182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:00:21.557258 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:00:21.562648 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:00:21.565018 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:00:21.566152 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:00:21.566555 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:00:21.567804 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:00:21.587715 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:00:21.601536 augenrules[1372]: No rules
Jan 29 11:00:21.602900 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:00:21.603087 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:00:21.608341 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:00:21.613093 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1342)
Jan 29 11:00:21.659216 systemd-networkd[1344]: lo: Link UP
Jan 29 11:00:21.659229 systemd-networkd[1344]: lo: Gained carrier
Jan 29 11:00:21.661874 systemd-networkd[1344]: Enumeration completed
Jan 29 11:00:21.661972 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:00:21.666532 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:00:21.666542 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:00:21.667145 systemd-networkd[1344]: eth0: Link UP
Jan 29 11:00:21.667154 systemd-networkd[1344]: eth0: Gained carrier
Jan 29 11:00:21.667167 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:00:21.677309 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:00:21.679802 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:00:21.682165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:00:21.684472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:00:21.689863 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:00:21.690561 systemd-resolved[1309]: Positive Trust Anchors:
Jan 29 11:00:21.690801 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:00:21.690876 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:00:21.691111 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:00:21.698033 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Jan 29 11:00:21.699857 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:00:21.701989 systemd-networkd[1344]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:00:21.702159 systemd[1]: Reached target network.target - Network.
Jan 29 11:00:21.703088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:00:21.704603 systemd-timesyncd[1359]: Network configuration changed, trying to establish connection.
Jan 29 11:00:21.705881 systemd-timesyncd[1359]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:00:21.705990 systemd-timesyncd[1359]: Initial clock synchronization to Wed 2025-01-29 11:00:21.672922 UTC.
Jan 29 11:00:21.710394 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:00:21.725313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:00:21.735206 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:00:21.738922 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:00:21.757897 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:00:21.780115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:00:21.794214 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:00:21.795705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:00:21.796866 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:00:21.798037 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:00:21.799270 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:00:21.800661 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:00:21.801856 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:00:21.803110 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:00:21.804309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:00:21.804347 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:00:21.805279 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:00:21.807119 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:00:21.809384 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:00:21.822312 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:00:21.824536 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:00:21.826135 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:00:21.827281 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:00:21.828322 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:00:21.829271 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:00:21.829301 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:00:21.830268 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:00:21.833110 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:00:21.832215 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:00:21.834897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:00:21.838130 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:00:21.839829 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:00:21.841295 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:00:21.841947 jq[1407]: false
Jan 29 11:00:21.844198 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:00:21.848165 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:00:21.851025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:00:21.855032 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:00:21.859021 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:00:21.859841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:00:21.861774 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:00:21.863614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:00:21.866097 extend-filesystems[1408]: Found loop3 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found loop4 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found loop5 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda1 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda2 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda3 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found usr Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda4 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda6 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda7 Jan 29 11:00:21.866097 extend-filesystems[1408]: Found vda9 Jan 29 11:00:21.866097 extend-filesystems[1408]: Checking size of /dev/vda9 Jan 29 11:00:21.865497 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:00:21.864459 dbus-daemon[1406]: [system] SELinux support is enabled Jan 29 11:00:21.883905 extend-filesystems[1408]: Resized partition /dev/vda9 Jan 29 11:00:21.870150 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:00:21.887653 jq[1424]: true Jan 29 11:00:21.887904 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:00:21.880422 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:00:21.882898 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:00:21.883187 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:00:21.883323 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:00:21.889471 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 29 11:00:21.889615 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:00:21.893127 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:00:21.909838 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:00:21.917763 tar[1431]: linux-arm64/LICENSE Jan 29 11:00:21.917763 tar[1431]: linux-arm64/helm Jan 29 11:00:21.916948 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:00:21.916976 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:00:21.918411 update_engine[1422]: I20250129 11:00:21.918267 1422 main.cc:92] Flatcar Update Engine starting Jan 29 11:00:21.919104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1355) Jan 29 11:00:21.936288 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:00:21.920739 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:00:21.936436 update_engine[1422]: I20250129 11:00:21.920333 1422 update_check_scheduler.cc:74] Next update check in 4m43s Jan 29 11:00:21.936466 jq[1432]: true Jan 29 11:00:21.920755 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:00:21.938508 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:00:21.938508 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:00:21.938508 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 29 11:00:21.924603 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:00:21.948046 extend-filesystems[1408]: Resized filesystem in /dev/vda9 Jan 29 11:00:21.929394 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:00:21.940060 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:00:21.942126 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:00:21.949946 systemd-logind[1416]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:00:21.950733 systemd-logind[1416]: New seat seat0. Jan 29 11:00:21.951839 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:00:21.992448 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:00:21.994526 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:00:21.996369 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:00:22.002970 locksmithd[1444]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:00:22.120593 containerd[1433]: time="2025-01-29T11:00:22.120500217Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:00:22.149202 containerd[1433]: time="2025-01-29T11:00:22.149099954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151315 containerd[1433]: time="2025-01-29T11:00:22.151279386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151315 containerd[1433]: time="2025-01-29T11:00:22.151310841Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 29 11:00:22.151416 containerd[1433]: time="2025-01-29T11:00:22.151326329Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:00:22.151499 containerd[1433]: time="2025-01-29T11:00:22.151481330Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:00:22.151528 containerd[1433]: time="2025-01-29T11:00:22.151503803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151602 containerd[1433]: time="2025-01-29T11:00:22.151584477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151602 containerd[1433]: time="2025-01-29T11:00:22.151600883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151779 containerd[1433]: time="2025-01-29T11:00:22.151759996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151808 containerd[1433]: time="2025-01-29T11:00:22.151779396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151808 containerd[1433]: time="2025-01-29T11:00:22.151792249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151808 containerd[1433]: time="2025-01-29T11:00:22.151801231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:00:22.151909 containerd[1433]: time="2025-01-29T11:00:22.151869610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:00:22.152124 containerd[1433]: time="2025-01-29T11:00:22.152055707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:00:22.152203 containerd[1433]: time="2025-01-29T11:00:22.152182885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:00:22.152203 containerd[1433]: time="2025-01-29T11:00:22.152202125Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:00:22.152319 containerd[1433]: time="2025-01-29T11:00:22.152280643Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:00:22.152347 containerd[1433]: time="2025-01-29T11:00:22.152326868Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:00:22.155923 containerd[1433]: time="2025-01-29T11:00:22.155867975Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:00:22.155923 containerd[1433]: time="2025-01-29T11:00:22.155916515Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:00:22.156013 containerd[1433]: time="2025-01-29T11:00:22.155931245Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:00:22.156013 containerd[1433]: time="2025-01-29T11:00:22.155945895Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 11:00:22.156013 containerd[1433]: time="2025-01-29T11:00:22.155959986Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:00:22.156156 containerd[1433]: time="2025-01-29T11:00:22.156122850Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156363036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156569371Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156588491Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156603979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156618350Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156631523Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156645574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156659824Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156674155Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156686849Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156698066Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156709203Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156729720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.156867 containerd[1433]: time="2025-01-29T11:00:22.156744929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156757104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156777821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156790276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156802890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156813787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156826002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156839933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156853785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156866040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156877496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156888952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156903842Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156924000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156937492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157190 containerd[1433]: time="2025-01-29T11:00:22.156948031Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157258711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157287252Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157297471Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157308967Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157318947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157347807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157359184Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:00:22.157464 containerd[1433]: time="2025-01-29T11:00:22.157369443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:00:22.159014 containerd[1433]: time="2025-01-29T11:00:22.157753931Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:00:22.159014 containerd[1433]: time="2025-01-29T11:00:22.157863545Z" level=info msg="Connect containerd service" Jan 29 11:00:22.159014 containerd[1433]: time="2025-01-29T11:00:22.157901068Z" level=info msg="using legacy CRI server" Jan 29 11:00:22.159014 containerd[1433]: time="2025-01-29T11:00:22.157908533Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:00:22.159014 containerd[1433]: time="2025-01-29T11:00:22.158303399Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:00:22.159543 containerd[1433]: time="2025-01-29T11:00:22.159494666Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:00:22.160375 containerd[1433]: time="2025-01-29T11:00:22.159751377Z" level=info msg="Start subscribing containerd event" Jan 29 11:00:22.160375 containerd[1433]: time="2025-01-29T11:00:22.159802791Z" level=info msg="Start recovering state" Jan 29 11:00:22.160375 containerd[1433]: time="2025-01-29T11:00:22.159866141Z" level=info msg="Start event monitor"
Jan 29 11:00:22.160375 containerd[1433]: time="2025-01-29T11:00:22.159877238Z" level=info msg="Start snapshots syncer" Jan 29 11:00:22.160375 containerd[1433]: time="2025-01-29T11:00:22.159887137Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:00:22.160375 containerd[1433]: time="2025-01-29T11:00:22.159894323Z" level=info msg="Start streaming server" Jan 29 11:00:22.160497 containerd[1433]: time="2025-01-29T11:00:22.160392696Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:00:22.160497 containerd[1433]: time="2025-01-29T11:00:22.160441715Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:00:22.160574 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:00:22.163386 containerd[1433]: time="2025-01-29T11:00:22.163267336Z" level=info msg="containerd successfully booted in 0.044832s" Jan 29 11:00:22.329233 tar[1431]: linux-arm64/README.md Jan 29 11:00:22.341339 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:00:22.598785 sshd_keygen[1423]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:00:22.617448 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:00:22.633342 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:00:22.638678 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:00:22.638911 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:00:22.641770 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:00:22.658525 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:00:22.671350 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:00:22.673395 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:00:22.674595 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:00:22.898191 systemd-networkd[1344]: eth0: Gained IPv6LL Jan 29 11:00:22.900928 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:00:22.902720 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:00:22.913296 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:00:22.915578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:22.917538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:00:22.931927 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:00:22.932144 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:00:22.935372 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:00:22.937699 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:00:23.443580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:23.445176 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:00:23.447021 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:00:23.447196 systemd[1]: Startup finished in 608ms (kernel) + 4.482s (initrd) + 3.296s (userspace) = 8.388s. 
Jan 29 11:00:23.468612 agetty[1495]: failed to open credentials directory Jan 29 11:00:23.468662 agetty[1496]: failed to open credentials directory Jan 29 11:00:23.853259 kubelet[1519]: E0129 11:00:23.853102 1519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:00:23.855414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:00:23.855557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:00:28.227034 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:00:28.228524 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:36814.service - OpenSSH per-connection server daemon (10.0.0.1:36814). Jan 29 11:00:28.320434 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 36814 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:28.322702 sshd-session[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:28.331774 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:00:28.352920 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:00:28.355132 systemd-logind[1416]: New session 1 of user core. Jan 29 11:00:28.364025 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:00:28.367904 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:00:28.376785 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:00:28.470496 systemd[1537]: Queued start job for default target default.target. 
Jan 29 11:00:28.482112 systemd[1537]: Created slice app.slice - User Application Slice. Jan 29 11:00:28.482175 systemd[1537]: Reached target paths.target - Paths. Jan 29 11:00:28.482190 systemd[1537]: Reached target timers.target - Timers. Jan 29 11:00:28.483538 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:00:28.494838 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:00:28.494970 systemd[1537]: Reached target sockets.target - Sockets. Jan 29 11:00:28.494983 systemd[1537]: Reached target basic.target - Basic System. Jan 29 11:00:28.495025 systemd[1537]: Reached target default.target - Main User Target. Jan 29 11:00:28.495055 systemd[1537]: Startup finished in 109ms. Jan 29 11:00:28.495295 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:00:28.496746 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:00:28.558912 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:36826.service - OpenSSH per-connection server daemon (10.0.0.1:36826). Jan 29 11:00:28.604863 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 36826 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:28.606215 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:28.610245 systemd-logind[1416]: New session 2 of user core. Jan 29 11:00:28.627255 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:00:28.680903 sshd[1550]: Connection closed by 10.0.0.1 port 36826 Jan 29 11:00:28.681579 sshd-session[1548]: pam_unix(sshd:session): session closed for user core Jan 29 11:00:28.691642 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:36826.service: Deactivated successfully. Jan 29 11:00:28.694662 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:00:28.696015 systemd-logind[1416]: Session 2 logged out. Waiting for processes to exit. 
Jan 29 11:00:28.710420 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:36830.service - OpenSSH per-connection server daemon (10.0.0.1:36830). Jan 29 11:00:28.711323 systemd-logind[1416]: Removed session 2. Jan 29 11:00:28.752192 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 36830 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:28.753407 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:28.757184 systemd-logind[1416]: New session 3 of user core. Jan 29 11:00:28.765251 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:00:28.813123 sshd[1557]: Connection closed by 10.0.0.1 port 36830 Jan 29 11:00:28.813005 sshd-session[1555]: pam_unix(sshd:session): session closed for user core Jan 29 11:00:28.826595 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:36830.service: Deactivated successfully. Jan 29 11:00:28.829750 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:00:28.831126 systemd-logind[1416]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:00:28.832557 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846). Jan 29 11:00:28.833446 systemd-logind[1416]: Removed session 3. Jan 29 11:00:28.881938 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:28.883292 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:28.887614 systemd-logind[1416]: New session 4 of user core. Jan 29 11:00:28.898259 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:00:28.950615 sshd[1564]: Connection closed by 10.0.0.1 port 36846 Jan 29 11:00:28.951190 sshd-session[1562]: pam_unix(sshd:session): session closed for user core Jan 29 11:00:28.969703 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:36846.service: Deactivated successfully. 
Jan 29 11:00:28.971679 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:00:28.974114 systemd-logind[1416]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:00:28.975492 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:36856.service - OpenSSH per-connection server daemon (10.0.0.1:36856). Jan 29 11:00:28.978783 systemd-logind[1416]: Removed session 4. Jan 29 11:00:29.023542 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 36856 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:29.024792 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:29.029119 systemd-logind[1416]: New session 5 of user core. Jan 29 11:00:29.044269 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:00:29.105378 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:00:29.105671 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:00:29.131185 sudo[1572]: pam_unix(sudo:session): session closed for user root Jan 29 11:00:29.134200 sshd[1571]: Connection closed by 10.0.0.1 port 36856 Jan 29 11:00:29.134007 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Jan 29 11:00:29.144605 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:36856.service: Deactivated successfully. Jan 29 11:00:29.146193 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:00:29.148280 systemd-logind[1416]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:00:29.149942 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:36858.service - OpenSSH per-connection server daemon (10.0.0.1:36858). Jan 29 11:00:29.150819 systemd-logind[1416]: Removed session 5. 
Jan 29 11:00:29.198333 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 36858 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:29.199745 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:29.204532 systemd-logind[1416]: New session 6 of user core. Jan 29 11:00:29.215296 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:00:29.267496 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:00:29.267763 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:00:29.271327 sudo[1581]: pam_unix(sudo:session): session closed for user root Jan 29 11:00:29.276247 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:00:29.276539 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:00:29.292460 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:00:29.317444 augenrules[1603]: No rules Jan 29 11:00:29.318182 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:00:29.318379 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:00:29.319384 sudo[1580]: pam_unix(sudo:session): session closed for user root Jan 29 11:00:29.321582 sshd[1579]: Connection closed by 10.0.0.1 port 36858 Jan 29 11:00:29.321458 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Jan 29 11:00:29.328648 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:36858.service: Deactivated successfully. Jan 29 11:00:29.330403 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:00:29.331718 systemd-logind[1416]: Session 6 logged out. Waiting for processes to exit. 
Jan 29 11:00:29.333230 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:36866.service - OpenSSH per-connection server daemon (10.0.0.1:36866). Jan 29 11:00:29.333911 systemd-logind[1416]: Removed session 6. Jan 29 11:00:29.379321 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 36866 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:00:29.380663 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:00:29.384630 systemd-logind[1416]: New session 7 of user core. Jan 29 11:00:29.396271 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:00:29.447588 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:00:29.447887 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:00:29.769388 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:00:29.769551 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:00:30.012516 dockerd[1635]: time="2025-01-29T11:00:30.012446587Z" level=info msg="Starting up" Jan 29 11:00:30.148320 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2709261430-merged.mount: Deactivated successfully. Jan 29 11:00:30.168378 dockerd[1635]: time="2025-01-29T11:00:30.168327412Z" level=info msg="Loading containers: start." Jan 29 11:00:30.320132 kernel: Initializing XFRM netlink socket Jan 29 11:00:30.393211 systemd-networkd[1344]: docker0: Link UP Jan 29 11:00:30.424494 dockerd[1635]: time="2025-01-29T11:00:30.424373439Z" level=info msg="Loading containers: done." 
Jan 29 11:00:30.439874 dockerd[1635]: time="2025-01-29T11:00:30.439806169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:00:30.440162 dockerd[1635]: time="2025-01-29T11:00:30.439916633Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:00:30.440262 dockerd[1635]: time="2025-01-29T11:00:30.440240316Z" level=info msg="Daemon has completed initialization" Jan 29 11:00:30.470555 dockerd[1635]: time="2025-01-29T11:00:30.470489371Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:00:30.470729 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:00:30.938030 containerd[1433]: time="2025-01-29T11:00:30.937943319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:00:31.644649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336937748.mount: Deactivated successfully. 
Jan 29 11:00:32.955627 containerd[1433]: time="2025-01-29T11:00:32.955569207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:32.957117 containerd[1433]: time="2025-01-29T11:00:32.957062756Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220950" Jan 29 11:00:32.958226 containerd[1433]: time="2025-01-29T11:00:32.958165606Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:32.962212 containerd[1433]: time="2025-01-29T11:00:32.962173880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:32.962857 containerd[1433]: time="2025-01-29T11:00:32.962796329Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 2.024807743s" Jan 29 11:00:32.962857 containerd[1433]: time="2025-01-29T11:00:32.962835926Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 11:00:32.963519 containerd[1433]: time="2025-01-29T11:00:32.963494136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:00:34.105896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 29 11:00:34.120362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:34.229905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:34.234365 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:00:34.355186 kubelet[1898]: E0129 11:00:34.355135 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:00:34.358487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:00:34.358642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:00:34.604155 containerd[1433]: time="2025-01-29T11:00:34.604101049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:34.605596 containerd[1433]: time="2025-01-29T11:00:34.605514909Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527109" Jan 29 11:00:34.606585 containerd[1433]: time="2025-01-29T11:00:34.606549567Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:34.610217 containerd[1433]: time="2025-01-29T11:00:34.609023222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:34.613811 containerd[1433]: time="2025-01-29T11:00:34.613754015Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.650226594s" Jan 29 11:00:34.613912 containerd[1433]: time="2025-01-29T11:00:34.613819833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 11:00:34.614809 containerd[1433]: time="2025-01-29T11:00:34.614602651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:00:36.091620 containerd[1433]: time="2025-01-29T11:00:36.091558349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:36.092055 containerd[1433]: time="2025-01-29T11:00:36.092024960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481115" Jan 29 11:00:36.092866 containerd[1433]: time="2025-01-29T11:00:36.092813543Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:36.095812 containerd[1433]: time="2025-01-29T11:00:36.095778313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:36.096927 containerd[1433]: time="2025-01-29T11:00:36.096892664Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id 
\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.482255724s" Jan 29 11:00:36.096962 containerd[1433]: time="2025-01-29T11:00:36.096926476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 11:00:36.097936 containerd[1433]: time="2025-01-29T11:00:36.097858579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:00:37.319889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103188096.mount: Deactivated successfully. Jan 29 11:00:37.540293 containerd[1433]: time="2025-01-29T11:00:37.540233381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:37.541403 containerd[1433]: time="2025-01-29T11:00:37.541356103Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 29 11:00:37.542002 containerd[1433]: time="2025-01-29T11:00:37.541954836Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:37.543811 containerd[1433]: time="2025-01-29T11:00:37.543771576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:37.544746 containerd[1433]: time="2025-01-29T11:00:37.544700571Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.446811017s" Jan 29 11:00:37.544746 containerd[1433]: time="2025-01-29T11:00:37.544737142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 11:00:37.545277 containerd[1433]: time="2025-01-29T11:00:37.545202858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:00:38.343597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134337676.mount: Deactivated successfully. Jan 29 11:00:39.644280 containerd[1433]: time="2025-01-29T11:00:39.644224203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:39.645345 containerd[1433]: time="2025-01-29T11:00:39.645204930Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jan 29 11:00:39.646327 containerd[1433]: time="2025-01-29T11:00:39.646281631Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:39.649182 containerd[1433]: time="2025-01-29T11:00:39.649130755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:39.650494 containerd[1433]: time="2025-01-29T11:00:39.650454806Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.105224128s" Jan 29 11:00:39.650494 containerd[1433]: time="2025-01-29T11:00:39.650493339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 11:00:39.651060 containerd[1433]: time="2025-01-29T11:00:39.651031610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:00:40.192473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947184307.mount: Deactivated successfully. Jan 29 11:00:40.197175 containerd[1433]: time="2025-01-29T11:00:40.197135860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:40.197874 containerd[1433]: time="2025-01-29T11:00:40.197687505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 11:00:40.198631 containerd[1433]: time="2025-01-29T11:00:40.198564300Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:40.201500 containerd[1433]: time="2025-01-29T11:00:40.201338515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:40.202904 containerd[1433]: time="2025-01-29T11:00:40.202876645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 551.807901ms" Jan 29 
11:00:40.203103 containerd[1433]: time="2025-01-29T11:00:40.202999526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:00:40.203507 containerd[1433]: time="2025-01-29T11:00:40.203471422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:00:40.921544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076252898.mount: Deactivated successfully. Jan 29 11:00:43.811279 containerd[1433]: time="2025-01-29T11:00:43.811212758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:43.811883 containerd[1433]: time="2025-01-29T11:00:43.811840345Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Jan 29 11:00:43.812671 containerd[1433]: time="2025-01-29T11:00:43.812642240Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:43.815604 containerd[1433]: time="2025-01-29T11:00:43.815572846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:00:43.819287 containerd[1433]: time="2025-01-29T11:00:43.817177875Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.613673473s" Jan 29 11:00:43.819287 containerd[1433]: time="2025-01-29T11:00:43.817228968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 11:00:44.377509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:00:44.386245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:44.485297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:44.488815 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:00:44.525487 kubelet[2065]: E0129 11:00:44.525398 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:00:44.528049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:00:44.528227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:00:48.476140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:48.487363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:48.508243 systemd[1]: Reloading requested from client PID 2081 ('systemctl') (unit session-7.scope)... Jan 29 11:00:48.508260 systemd[1]: Reloading... Jan 29 11:00:48.575096 zram_generator::config[2120]: No configuration found. Jan 29 11:00:48.752545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:00:48.804499 systemd[1]: Reloading finished in 295 ms. 
Jan 29 11:00:48.857800 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:00:48.857905 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:00:48.858270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:48.860057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:48.966888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:48.971547 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:00:49.012445 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:00:49.012445 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:00:49.012445 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:00:49.012745 kubelet[2164]: I0129 11:00:49.012448 2164 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:00:49.791328 kubelet[2164]: I0129 11:00:49.791281 2164 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:00:49.791328 kubelet[2164]: I0129 11:00:49.791314 2164 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:00:49.791612 kubelet[2164]: I0129 11:00:49.791580 2164 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:00:49.820715 kubelet[2164]: I0129 11:00:49.820680 2164 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:00:49.821437 kubelet[2164]: E0129 11:00:49.821406 2164 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:49.832589 kubelet[2164]: E0129 11:00:49.832552 2164 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:00:49.832589 kubelet[2164]: I0129 11:00:49.832583 2164 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:00:49.835167 kubelet[2164]: I0129 11:00:49.835146 2164 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:00:49.835781 kubelet[2164]: I0129 11:00:49.835740 2164 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:00:49.835953 kubelet[2164]: I0129 11:00:49.835779 2164 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:00:49.836034 kubelet[2164]: I0129 11:00:49.836021 2164 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 29 11:00:49.836034 kubelet[2164]: I0129 11:00:49.836030 2164 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:00:49.836269 kubelet[2164]: I0129 11:00:49.836243 2164 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:00:49.840377 kubelet[2164]: I0129 11:00:49.840351 2164 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:00:49.840377 kubelet[2164]: I0129 11:00:49.840377 2164 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:00:49.840448 kubelet[2164]: I0129 11:00:49.840399 2164 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:00:49.840448 kubelet[2164]: I0129 11:00:49.840409 2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:00:49.843463 kubelet[2164]: I0129 11:00:49.843018 2164 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:00:49.843463 kubelet[2164]: W0129 11:00:49.843258 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:49.843463 kubelet[2164]: E0129 11:00:49.843308 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:49.843586 kubelet[2164]: W0129 11:00:49.843552 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:49.843608 
kubelet[2164]: E0129 11:00:49.843584 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:49.843752 kubelet[2164]: I0129 11:00:49.843739 2164 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:00:49.843885 kubelet[2164]: W0129 11:00:49.843873 2164 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:00:49.844814 kubelet[2164]: I0129 11:00:49.844659 2164 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:00:49.844814 kubelet[2164]: I0129 11:00:49.844696 2164 server.go:1287] "Started kubelet" Jan 29 11:00:49.845927 kubelet[2164]: I0129 11:00:49.845806 2164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:00:49.846632 kubelet[2164]: I0129 11:00:49.846127 2164 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:00:49.846632 kubelet[2164]: I0129 11:00:49.846187 2164 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:00:49.848303 kubelet[2164]: I0129 11:00:49.848273 2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:00:49.848826 kubelet[2164]: I0129 11:00:49.848805 2164 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:00:49.850714 kubelet[2164]: I0129 11:00:49.850686 2164 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:00:49.851008 kubelet[2164]: I0129 11:00:49.850979 2164 volume_manager.go:297] "Starting Kubelet 
Volume Manager" Jan 29 11:00:49.851292 kubelet[2164]: E0129 11:00:49.850992 2164 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f24ce57e33d1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:00:49.844673818 +0000 UTC m=+0.869630251,LastTimestamp:2025-01-29 11:00:49.844673818 +0000 UTC m=+0.869630251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:00:49.851384 kubelet[2164]: E0129 11:00:49.851297 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:49.851638 kubelet[2164]: I0129 11:00:49.851609 2164 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:00:49.851702 kubelet[2164]: I0129 11:00:49.851688 2164 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:00:49.853606 kubelet[2164]: W0129 11:00:49.852314 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:49.853606 kubelet[2164]: E0129 11:00:49.852374 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" 
logger="UnhandledError" Jan 29 11:00:49.853606 kubelet[2164]: E0129 11:00:49.852451 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms" Jan 29 11:00:49.853606 kubelet[2164]: I0129 11:00:49.852857 2164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:00:49.853932 kubelet[2164]: E0129 11:00:49.853900 2164 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:00:49.855297 kubelet[2164]: I0129 11:00:49.855270 2164 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:00:49.855297 kubelet[2164]: I0129 11:00:49.855290 2164 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:00:49.866187 kubelet[2164]: I0129 11:00:49.866158 2164 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:00:49.866187 kubelet[2164]: I0129 11:00:49.866175 2164 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:00:49.866187 kubelet[2164]: I0129 11:00:49.866193 2164 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:00:49.868579 kubelet[2164]: I0129 11:00:49.868531 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:00:49.869692 kubelet[2164]: I0129 11:00:49.869623 2164 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:00:49.869692 kubelet[2164]: I0129 11:00:49.869652 2164 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:00:49.869692 kubelet[2164]: I0129 11:00:49.869679 2164 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:00:49.869692 kubelet[2164]: I0129 11:00:49.869687 2164 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:00:49.869834 kubelet[2164]: E0129 11:00:49.869728 2164 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:00:49.870368 kubelet[2164]: W0129 11:00:49.870305 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:49.870529 kubelet[2164]: E0129 11:00:49.870347 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:49.951791 kubelet[2164]: E0129 11:00:49.951751 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:49.969956 kubelet[2164]: E0129 11:00:49.969928 2164 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:00:50.051993 kubelet[2164]: E0129 11:00:50.051875 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:50.053598 kubelet[2164]: E0129 11:00:50.053472 2164 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms" Jan 29 11:00:50.064897 kubelet[2164]: I0129 11:00:50.064864 2164 policy_none.go:49] "None policy: Start" Jan 29 11:00:50.064897 kubelet[2164]: I0129 11:00:50.064891 2164 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:00:50.064897 kubelet[2164]: I0129 11:00:50.064905 2164 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:00:50.070935 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:00:50.084565 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:00:50.087334 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:00:50.097794 kubelet[2164]: I0129 11:00:50.097751 2164 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:00:50.097993 kubelet[2164]: I0129 11:00:50.097970 2164 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:00:50.098026 kubelet[2164]: I0129 11:00:50.097985 2164 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:00:50.098263 kubelet[2164]: I0129 11:00:50.098193 2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:00:50.099663 kubelet[2164]: E0129 11:00:50.099559 2164 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:00:50.099663 kubelet[2164]: E0129 11:00:50.099593 2164 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:00:50.179927 systemd[1]: Created slice kubepods-burstable-pod85455f2c1c23e8ca9a683e484234e730.slice - libcontainer container kubepods-burstable-pod85455f2c1c23e8ca9a683e484234e730.slice. Jan 29 11:00:50.199060 kubelet[2164]: I0129 11:00:50.199012 2164 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:00:50.199467 kubelet[2164]: E0129 11:00:50.199436 2164 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Jan 29 11:00:50.202560 kubelet[2164]: E0129 11:00:50.202528 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:50.205357 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 29 11:00:50.213249 kubelet[2164]: E0129 11:00:50.213227 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:50.215962 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. 
Jan 29 11:00:50.217453 kubelet[2164]: E0129 11:00:50.217427 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:50.353484 kubelet[2164]: I0129 11:00:50.353295 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:50.353484 kubelet[2164]: I0129 11:00:50.353335 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:50.353484 kubelet[2164]: I0129 11:00:50.353359 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:50.353484 kubelet[2164]: I0129 11:00:50.353393 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85455f2c1c23e8ca9a683e484234e730-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85455f2c1c23e8ca9a683e484234e730\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:50.353484 kubelet[2164]: I0129 11:00:50.353410 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85455f2c1c23e8ca9a683e484234e730-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85455f2c1c23e8ca9a683e484234e730\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:50.353729 kubelet[2164]: I0129 11:00:50.353425 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:50.353729 kubelet[2164]: I0129 11:00:50.353444 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:50.353729 kubelet[2164]: I0129 11:00:50.353460 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:50.353729 kubelet[2164]: I0129 11:00:50.353475 2164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85455f2c1c23e8ca9a683e484234e730-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85455f2c1c23e8ca9a683e484234e730\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:50.401170 kubelet[2164]: I0129 11:00:50.401147 2164 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:00:50.401503 kubelet[2164]: E0129 
11:00:50.401475 2164 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Jan 29 11:00:50.454458 kubelet[2164]: E0129 11:00:50.454425 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms" Jan 29 11:00:50.503843 kubelet[2164]: E0129 11:00:50.503805 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:50.504651 containerd[1433]: time="2025-01-29T11:00:50.504611115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85455f2c1c23e8ca9a683e484234e730,Namespace:kube-system,Attempt:0,}" Jan 29 11:00:50.513691 kubelet[2164]: E0129 11:00:50.513667 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:50.514717 containerd[1433]: time="2025-01-29T11:00:50.514684036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 11:00:50.517971 kubelet[2164]: E0129 11:00:50.517934 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:50.518306 containerd[1433]: time="2025-01-29T11:00:50.518270305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 11:00:50.803110 kubelet[2164]: 
I0129 11:00:50.802978 2164 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:00:50.803342 kubelet[2164]: E0129 11:00:50.803301 2164 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Jan 29 11:00:51.037873 kubelet[2164]: W0129 11:00:51.037827 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:51.037873 kubelet[2164]: E0129 11:00:51.037872 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:51.056255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655852986.mount: Deactivated successfully. 
Jan 29 11:00:51.062192 containerd[1433]: time="2025-01-29T11:00:51.062107300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:00:51.064450 containerd[1433]: time="2025-01-29T11:00:51.064370984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:00:51.065286 containerd[1433]: time="2025-01-29T11:00:51.065254464Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:00:51.066754 containerd[1433]: time="2025-01-29T11:00:51.066717841Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:00:51.067431 containerd[1433]: time="2025-01-29T11:00:51.067264468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:00:51.068212 containerd[1433]: time="2025-01-29T11:00:51.068163904Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:00:51.068889 containerd[1433]: time="2025-01-29T11:00:51.068681500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:00:51.071018 containerd[1433]: time="2025-01-29T11:00:51.070975934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:00:51.072683 
containerd[1433]: time="2025-01-29T11:00:51.072636289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.299246ms" Jan 29 11:00:51.073315 containerd[1433]: time="2025-01-29T11:00:51.073265370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.565604ms" Jan 29 11:00:51.075773 containerd[1433]: time="2025-01-29T11:00:51.075731270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.977777ms" Jan 29 11:00:51.204257 containerd[1433]: time="2025-01-29T11:00:51.203588821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:00:51.204257 containerd[1433]: time="2025-01-29T11:00:51.203670595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:00:51.204257 containerd[1433]: time="2025-01-29T11:00:51.203686230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:00:51.204257 containerd[1433]: time="2025-01-29T11:00:51.203823666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:00:51.205095 containerd[1433]: time="2025-01-29T11:00:51.204877693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:00:51.205095 containerd[1433]: time="2025-01-29T11:00:51.204929157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:00:51.205095 containerd[1433]: time="2025-01-29T11:00:51.204943912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:00:51.205095 containerd[1433]: time="2025-01-29T11:00:51.205016329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:00:51.206363 containerd[1433]: time="2025-01-29T11:00:51.202993969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:00:51.206449 containerd[1433]: time="2025-01-29T11:00:51.206351387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:00:51.206449 containerd[1433]: time="2025-01-29T11:00:51.206372900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:00:51.206522 containerd[1433]: time="2025-01-29T11:00:51.206458633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:00:51.218658 kubelet[2164]: W0129 11:00:51.218580 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:51.218658 kubelet[2164]: E0129 11:00:51.218652 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:51.235217 systemd[1]: Started cri-containerd-1fd5e1662f7837f9fe6f201d064c38904540aef670d50a2e84cf80a320df9b79.scope - libcontainer container 1fd5e1662f7837f9fe6f201d064c38904540aef670d50a2e84cf80a320df9b79. Jan 29 11:00:51.236237 systemd[1]: Started cri-containerd-2120d18824212c4b864d4ac15b293bdd2bc6baabb7f7f4619aa66a7c9f802962.scope - libcontainer container 2120d18824212c4b864d4ac15b293bdd2bc6baabb7f7f4619aa66a7c9f802962. Jan 29 11:00:51.237258 systemd[1]: Started cri-containerd-c38de21acf98f0c9768edaa77eb51280a9a7ddc29d05edaef5ff406d8d0f7c72.scope - libcontainer container c38de21acf98f0c9768edaa77eb51280a9a7ddc29d05edaef5ff406d8d0f7c72. 
Jan 29 11:00:51.255684 kubelet[2164]: E0129 11:00:51.255642 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="1.6s" Jan 29 11:00:51.272416 kubelet[2164]: W0129 11:00:51.272359 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:51.272416 kubelet[2164]: E0129 11:00:51.272428 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:51.272648 containerd[1433]: time="2025-01-29T11:00:51.272523052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"2120d18824212c4b864d4ac15b293bdd2bc6baabb7f7f4619aa66a7c9f802962\"" Jan 29 11:00:51.273629 containerd[1433]: time="2025-01-29T11:00:51.273596753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd5e1662f7837f9fe6f201d064c38904540aef670d50a2e84cf80a320df9b79\"" Jan 29 11:00:51.274447 kubelet[2164]: E0129 11:00:51.274412 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:51.274447 kubelet[2164]: E0129 11:00:51.274430 2164 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:51.276383 containerd[1433]: time="2025-01-29T11:00:51.276353561Z" level=info msg="CreateContainer within sandbox \"1fd5e1662f7837f9fe6f201d064c38904540aef670d50a2e84cf80a320df9b79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:00:51.277017 containerd[1433]: time="2025-01-29T11:00:51.276978763Z" level=info msg="CreateContainer within sandbox \"2120d18824212c4b864d4ac15b293bdd2bc6baabb7f7f4619aa66a7c9f802962\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:00:51.279617 containerd[1433]: time="2025-01-29T11:00:51.279592496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85455f2c1c23e8ca9a683e484234e730,Namespace:kube-system,Attempt:0,} returns sandbox id \"c38de21acf98f0c9768edaa77eb51280a9a7ddc29d05edaef5ff406d8d0f7c72\"" Jan 29 11:00:51.280382 kubelet[2164]: E0129 11:00:51.280364 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:51.282329 containerd[1433]: time="2025-01-29T11:00:51.282220305Z" level=info msg="CreateContainer within sandbox \"c38de21acf98f0c9768edaa77eb51280a9a7ddc29d05edaef5ff406d8d0f7c72\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:00:51.290988 containerd[1433]: time="2025-01-29T11:00:51.290935148Z" level=info msg="CreateContainer within sandbox \"1fd5e1662f7837f9fe6f201d064c38904540aef670d50a2e84cf80a320df9b79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"caa368bb1d4111fda17555629aeab78b416112ef93c3103c8f77ff8f7f2d7751\"" Jan 29 11:00:51.291625 containerd[1433]: time="2025-01-29T11:00:51.291566788Z" level=info msg="StartContainer for 
\"caa368bb1d4111fda17555629aeab78b416112ef93c3103c8f77ff8f7f2d7751\"" Jan 29 11:00:51.295337 containerd[1433]: time="2025-01-29T11:00:51.295291929Z" level=info msg="CreateContainer within sandbox \"2120d18824212c4b864d4ac15b293bdd2bc6baabb7f7f4619aa66a7c9f802962\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c13b49f288a8548dd7f9d7feb0d827f2adc69e5e363f97277ef1975d9328a4f\"" Jan 29 11:00:51.297460 containerd[1433]: time="2025-01-29T11:00:51.296358792Z" level=info msg="CreateContainer within sandbox \"c38de21acf98f0c9768edaa77eb51280a9a7ddc29d05edaef5ff406d8d0f7c72\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5561be1d7214d2b8e4beb740b9f3e4d632e5da2cb3435e6ccf5b5fa94dc2ce3c\"" Jan 29 11:00:51.297460 containerd[1433]: time="2025-01-29T11:00:51.296440646Z" level=info msg="StartContainer for \"8c13b49f288a8548dd7f9d7feb0d827f2adc69e5e363f97277ef1975d9328a4f\"" Jan 29 11:00:51.297460 containerd[1433]: time="2025-01-29T11:00:51.296680450Z" level=info msg="StartContainer for \"5561be1d7214d2b8e4beb740b9f3e4d632e5da2cb3435e6ccf5b5fa94dc2ce3c\"" Jan 29 11:00:51.323275 systemd[1]: Started cri-containerd-8c13b49f288a8548dd7f9d7feb0d827f2adc69e5e363f97277ef1975d9328a4f.scope - libcontainer container 8c13b49f288a8548dd7f9d7feb0d827f2adc69e5e363f97277ef1975d9328a4f. Jan 29 11:00:51.324314 systemd[1]: Started cri-containerd-caa368bb1d4111fda17555629aeab78b416112ef93c3103c8f77ff8f7f2d7751.scope - libcontainer container caa368bb1d4111fda17555629aeab78b416112ef93c3103c8f77ff8f7f2d7751. Jan 29 11:00:51.327925 systemd[1]: Started cri-containerd-5561be1d7214d2b8e4beb740b9f3e4d632e5da2cb3435e6ccf5b5fa94dc2ce3c.scope - libcontainer container 5561be1d7214d2b8e4beb740b9f3e4d632e5da2cb3435e6ccf5b5fa94dc2ce3c. 
Jan 29 11:00:51.394845 containerd[1433]: time="2025-01-29T11:00:51.394805127Z" level=info msg="StartContainer for \"5561be1d7214d2b8e4beb740b9f3e4d632e5da2cb3435e6ccf5b5fa94dc2ce3c\" returns successfully" Jan 29 11:00:51.395163 containerd[1433]: time="2025-01-29T11:00:51.395026217Z" level=info msg="StartContainer for \"8c13b49f288a8548dd7f9d7feb0d827f2adc69e5e363f97277ef1975d9328a4f\" returns successfully" Jan 29 11:00:51.395251 containerd[1433]: time="2025-01-29T11:00:51.395030096Z" level=info msg="StartContainer for \"caa368bb1d4111fda17555629aeab78b416112ef93c3103c8f77ff8f7f2d7751\" returns successfully" Jan 29 11:00:51.445137 kubelet[2164]: W0129 11:00:51.443275 2164 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jan 29 11:00:51.445137 kubelet[2164]: E0129 11:00:51.443348 2164 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:00:51.605554 kubelet[2164]: I0129 11:00:51.605524 2164 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:00:51.606049 kubelet[2164]: E0129 11:00:51.606013 2164 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Jan 29 11:00:51.878356 kubelet[2164]: E0129 11:00:51.878257 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:51.878449 kubelet[2164]: E0129 11:00:51.878413 2164 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:51.881444 kubelet[2164]: E0129 11:00:51.881419 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:51.881552 kubelet[2164]: E0129 11:00:51.881535 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:51.884270 kubelet[2164]: E0129 11:00:51.884194 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:51.884395 kubelet[2164]: E0129 11:00:51.884295 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:52.885885 kubelet[2164]: E0129 11:00:52.885705 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:52.885885 kubelet[2164]: E0129 11:00:52.885821 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:52.886624 kubelet[2164]: E0129 11:00:52.886455 2164 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:00:52.886624 kubelet[2164]: E0129 11:00:52.886571 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
11:00:53.207458 kubelet[2164]: I0129 11:00:53.207169 2164 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:00:53.240654 kubelet[2164]: E0129 11:00:53.240619 2164 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:00:53.332397 kubelet[2164]: I0129 11:00:53.332357 2164 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:00:53.332397 kubelet[2164]: E0129 11:00:53.332395 2164 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:00:53.335159 kubelet[2164]: E0129 11:00:53.335130 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:53.435959 kubelet[2164]: E0129 11:00:53.435910 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:53.536561 kubelet[2164]: E0129 11:00:53.536454 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:53.636856 kubelet[2164]: E0129 11:00:53.636802 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:53.737861 kubelet[2164]: E0129 11:00:53.737813 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:53.838523 kubelet[2164]: E0129 11:00:53.838481 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:53.938607 kubelet[2164]: E0129 11:00:53.938558 2164 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:54.039404 kubelet[2164]: E0129 11:00:54.039354 2164 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:54.152353 kubelet[2164]: I0129 11:00:54.152250 2164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:54.161687 kubelet[2164]: I0129 11:00:54.161652 2164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:54.166885 kubelet[2164]: I0129 11:00:54.166846 2164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:54.410104 kubelet[2164]: I0129 11:00:54.408770 2164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:54.414870 kubelet[2164]: E0129 11:00:54.414493 2164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:54.415806 kubelet[2164]: E0129 11:00:54.415739 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:54.846341 kubelet[2164]: I0129 11:00:54.846285 2164 apiserver.go:52] "Watching apiserver" Jan 29 11:00:54.848327 kubelet[2164]: E0129 11:00:54.848250 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:54.848409 kubelet[2164]: E0129 11:00:54.848371 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:54.851901 kubelet[2164]: I0129 11:00:54.851868 2164 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:00:54.887818 kubelet[2164]: E0129 
11:00:54.887792 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:55.434751 systemd[1]: Reloading requested from client PID 2444 ('systemctl') (unit session-7.scope)... Jan 29 11:00:55.434770 systemd[1]: Reloading... Jan 29 11:00:55.503119 zram_generator::config[2483]: No configuration found. Jan 29 11:00:55.584232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:00:55.648032 systemd[1]: Reloading finished in 212 ms. Jan 29 11:00:55.680384 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:55.696266 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:00:55.696541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:55.696594 systemd[1]: kubelet.service: Consumed 1.236s CPU time, 122.5M memory peak, 0B memory swap peak. Jan 29 11:00:55.710325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:00:55.805295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:00:55.810200 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:00:55.848971 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:00:55.848971 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 29 11:00:55.848971 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:00:55.849403 kubelet[2525]: I0129 11:00:55.849136 2525 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:00:55.856182 kubelet[2525]: I0129 11:00:55.856141 2525 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:00:55.856182 kubelet[2525]: I0129 11:00:55.856174 2525 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:00:55.856449 kubelet[2525]: I0129 11:00:55.856424 2525 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:00:55.857696 kubelet[2525]: I0129 11:00:55.857674 2525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:00:55.860262 kubelet[2525]: I0129 11:00:55.860018 2525 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:00:55.862681 kubelet[2525]: E0129 11:00:55.862646 2525 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:00:55.862770 kubelet[2525]: I0129 11:00:55.862759 2525 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:00:55.865230 kubelet[2525]: I0129 11:00:55.865200 2525 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:00:55.865656 kubelet[2525]: I0129 11:00:55.865523 2525 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:00:55.865713 kubelet[2525]: I0129 11:00:55.865552 2525 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:00:55.865780 kubelet[2525]: I0129 11:00:55.865717 2525 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 29 11:00:55.865780 kubelet[2525]: I0129 11:00:55.865725 2525 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:00:55.865780 kubelet[2525]: I0129 11:00:55.865769 2525 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:00:55.866104 kubelet[2525]: I0129 11:00:55.865895 2525 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:00:55.866104 kubelet[2525]: I0129 11:00:55.865908 2525 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:00:55.866104 kubelet[2525]: I0129 11:00:55.865925 2525 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:00:55.866104 kubelet[2525]: I0129 11:00:55.865935 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:00:55.869107 kubelet[2525]: I0129 11:00:55.869061 2525 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:00:55.869606 kubelet[2525]: I0129 11:00:55.869581 2525 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:00:55.870025 kubelet[2525]: I0129 11:00:55.869998 2525 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:00:55.870071 kubelet[2525]: I0129 11:00:55.870034 2525 server.go:1287] "Started kubelet" Jan 29 11:00:55.871176 kubelet[2525]: I0129 11:00:55.871115 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:00:55.871385 kubelet[2525]: I0129 11:00:55.871365 2525 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:00:55.871448 kubelet[2525]: I0129 11:00:55.871426 2525 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:00:55.872147 kubelet[2525]: I0129 11:00:55.872123 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:00:55.872494 kubelet[2525]: I0129 11:00:55.872465 2525 
server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:00:55.876364 kubelet[2525]: I0129 11:00:55.876121 2525 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:00:55.880719 kubelet[2525]: I0129 11:00:55.877814 2525 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:00:55.880719 kubelet[2525]: I0129 11:00:55.877992 2525 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:00:55.880719 kubelet[2525]: E0129 11:00:55.878023 2525 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:00:55.880719 kubelet[2525]: I0129 11:00:55.878187 2525 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:00:55.880719 kubelet[2525]: E0129 11:00:55.878223 2525 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:00:55.880719 kubelet[2525]: I0129 11:00:55.880268 2525 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:00:55.880719 kubelet[2525]: I0129 11:00:55.880475 2525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:00:55.883927 kubelet[2525]: I0129 11:00:55.883897 2525 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:00:55.890878 kubelet[2525]: I0129 11:00:55.890757 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:00:55.891985 kubelet[2525]: I0129 11:00:55.891707 2525 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:00:55.891985 kubelet[2525]: I0129 11:00:55.891726 2525 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:00:55.891985 kubelet[2525]: I0129 11:00:55.891746 2525 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:00:55.891985 kubelet[2525]: I0129 11:00:55.891753 2525 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:00:55.891985 kubelet[2525]: E0129 11:00:55.891790 2525 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:00:55.930054 kubelet[2525]: I0129 11:00:55.930024 2525 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:00:55.930054 kubelet[2525]: I0129 11:00:55.930045 2525 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:00:55.930054 kubelet[2525]: I0129 11:00:55.930066 2525 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:00:55.930290 kubelet[2525]: I0129 11:00:55.930271 2525 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:00:55.930328 kubelet[2525]: I0129 11:00:55.930287 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:00:55.930328 kubelet[2525]: I0129 11:00:55.930316 2525 policy_none.go:49] "None policy: Start" Jan 29 11:00:55.930328 kubelet[2525]: I0129 11:00:55.930326 2525 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:00:55.930390 kubelet[2525]: I0129 11:00:55.930336 2525 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:00:55.930453 kubelet[2525]: I0129 11:00:55.930441 2525 state_mem.go:75] "Updated machine memory state" Jan 29 11:00:55.934246 kubelet[2525]: I0129 11:00:55.934212 2525 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:00:55.934428 kubelet[2525]: I0129 
11:00:55.934404 2525 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:00:55.934476 kubelet[2525]: I0129 11:00:55.934422 2525 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:00:55.935357 kubelet[2525]: I0129 11:00:55.935055 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:00:55.935835 kubelet[2525]: E0129 11:00:55.935814 2525 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:00:55.993409 kubelet[2525]: I0129 11:00:55.993254 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:55.993409 kubelet[2525]: I0129 11:00:55.993342 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:55.994728 kubelet[2525]: I0129 11:00:55.994485 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:55.999887 kubelet[2525]: E0129 11:00:55.999840 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:56.000024 kubelet[2525]: E0129 11:00:55.999840 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:56.000380 kubelet[2525]: E0129 11:00:56.000250 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:56.039108 kubelet[2525]: I0129 11:00:56.039068 2525 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:00:56.046429 kubelet[2525]: I0129 11:00:56.046383 2525 kubelet_node_status.go:125] "Node was 
previously registered" node="localhost" Jan 29 11:00:56.046596 kubelet[2525]: I0129 11:00:56.046472 2525 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:00:56.079719 kubelet[2525]: I0129 11:00:56.079670 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:56.079719 kubelet[2525]: I0129 11:00:56.079710 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:56.079719 kubelet[2525]: I0129 11:00:56.079728 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85455f2c1c23e8ca9a683e484234e730-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85455f2c1c23e8ca9a683e484234e730\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:56.079865 kubelet[2525]: I0129 11:00:56.079748 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85455f2c1c23e8ca9a683e484234e730-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85455f2c1c23e8ca9a683e484234e730\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:56.079865 kubelet[2525]: I0129 11:00:56.079796 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:56.079865 kubelet[2525]: I0129 11:00:56.079814 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:56.079865 kubelet[2525]: I0129 11:00:56.079829 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:56.079865 kubelet[2525]: I0129 11:00:56.079850 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:00:56.079997 kubelet[2525]: I0129 11:00:56.079865 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85455f2c1c23e8ca9a683e484234e730-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85455f2c1c23e8ca9a683e484234e730\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:56.301180 kubelet[2525]: E0129 11:00:56.301054 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:56.301462 kubelet[2525]: E0129 11:00:56.301318 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:56.301498 kubelet[2525]: E0129 11:00:56.301466 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:56.868661 kubelet[2525]: I0129 11:00:56.868532 2525 apiserver.go:52] "Watching apiserver" Jan 29 11:00:56.878800 kubelet[2525]: I0129 11:00:56.878767 2525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:00:56.911585 kubelet[2525]: I0129 11:00:56.911368 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:56.911585 kubelet[2525]: I0129 11:00:56.911518 2525 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:56.911757 kubelet[2525]: E0129 11:00:56.911738 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:56.916466 kubelet[2525]: E0129 11:00:56.916391 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:00:56.916561 kubelet[2525]: E0129 11:00:56.916530 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:56.916790 kubelet[2525]: E0129 11:00:56.916391 2525 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:00:56.916790 kubelet[2525]: E0129 11:00:56.916738 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:56.932675 kubelet[2525]: I0129 11:00:56.932578 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.932563596 podStartE2EDuration="2.932563596s" podCreationTimestamp="2025-01-29 11:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:00:56.931219664 +0000 UTC m=+1.117845289" watchObservedRunningTime="2025-01-29 11:00:56.932563596 +0000 UTC m=+1.119189221" Jan 29 11:00:56.938992 kubelet[2525]: I0129 11:00:56.938945 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.938930698 podStartE2EDuration="2.938930698s" podCreationTimestamp="2025-01-29 11:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:00:56.938923619 +0000 UTC m=+1.125549244" watchObservedRunningTime="2025-01-29 11:00:56.938930698 +0000 UTC m=+1.125556283" Jan 29 11:00:56.945898 kubelet[2525]: I0129 11:00:56.945861 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.945849393 podStartE2EDuration="2.945849393s" podCreationTimestamp="2025-01-29 11:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:00:56.945715024 +0000 UTC m=+1.132340689" watchObservedRunningTime="2025-01-29 11:00:56.945849393 +0000 UTC m=+1.132475018" Jan 29 11:00:57.912743 
kubelet[2525]: E0129 11:00:57.912714 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:57.913238 kubelet[2525]: E0129 11:00:57.912818 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:00:58.914278 kubelet[2525]: E0129 11:00:58.914242 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:01.086136 kubelet[2525]: I0129 11:01:01.086108 2525 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:01:01.086942 containerd[1433]: time="2025-01-29T11:01:01.086833601Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:01:01.087308 kubelet[2525]: I0129 11:01:01.087008 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:01:01.092100 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 29 11:01:01.095859 sshd[1613]: Connection closed by 10.0.0.1 port 36866 Jan 29 11:01:01.096436 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:01.103503 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:36866.service: Deactivated successfully. Jan 29 11:01:01.105623 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:01:01.105829 systemd[1]: session-7.scope: Consumed 6.786s CPU time, 153.9M memory peak, 0B memory swap peak. Jan 29 11:01:01.106537 systemd-logind[1416]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:01:01.109132 systemd-logind[1416]: Removed session 7. 
Jan 29 11:01:01.572485 kubelet[2525]: E0129 11:01:01.572333 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:01.665516 systemd[1]: Created slice kubepods-besteffort-pod606c12f4_c56e_442c_a06c_f5567af228df.slice - libcontainer container kubepods-besteffort-pod606c12f4_c56e_442c_a06c_f5567af228df.slice. Jan 29 11:01:01.718947 kubelet[2525]: I0129 11:01:01.718891 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/606c12f4-c56e-442c-a06c-f5567af228df-lib-modules\") pod \"kube-proxy-5r2vt\" (UID: \"606c12f4-c56e-442c-a06c-f5567af228df\") " pod="kube-system/kube-proxy-5r2vt" Jan 29 11:01:01.719122 kubelet[2525]: I0129 11:01:01.718963 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/606c12f4-c56e-442c-a06c-f5567af228df-xtables-lock\") pod \"kube-proxy-5r2vt\" (UID: \"606c12f4-c56e-442c-a06c-f5567af228df\") " pod="kube-system/kube-proxy-5r2vt" Jan 29 11:01:01.719122 kubelet[2525]: I0129 11:01:01.718981 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/606c12f4-c56e-442c-a06c-f5567af228df-kube-proxy\") pod \"kube-proxy-5r2vt\" (UID: \"606c12f4-c56e-442c-a06c-f5567af228df\") " pod="kube-system/kube-proxy-5r2vt" Jan 29 11:01:01.719122 kubelet[2525]: I0129 11:01:01.718999 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9v7b\" (UniqueName: \"kubernetes.io/projected/606c12f4-c56e-442c-a06c-f5567af228df-kube-api-access-m9v7b\") pod \"kube-proxy-5r2vt\" (UID: \"606c12f4-c56e-442c-a06c-f5567af228df\") " pod="kube-system/kube-proxy-5r2vt" Jan 29 11:01:01.828170 
kubelet[2525]: E0129 11:01:01.828042 2525 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 11:01:01.828170 kubelet[2525]: E0129 11:01:01.828102 2525 projected.go:194] Error preparing data for projected volume kube-api-access-m9v7b for pod kube-system/kube-proxy-5r2vt: configmap "kube-root-ca.crt" not found Jan 29 11:01:01.828170 kubelet[2525]: E0129 11:01:01.828170 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/606c12f4-c56e-442c-a06c-f5567af228df-kube-api-access-m9v7b podName:606c12f4-c56e-442c-a06c-f5567af228df nodeName:}" failed. No retries permitted until 2025-01-29 11:01:02.328142743 +0000 UTC m=+6.514768368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m9v7b" (UniqueName: "kubernetes.io/projected/606c12f4-c56e-442c-a06c-f5567af228df-kube-api-access-m9v7b") pod "kube-proxy-5r2vt" (UID: "606c12f4-c56e-442c-a06c-f5567af228df") : configmap "kube-root-ca.crt" not found Jan 29 11:01:01.919220 kubelet[2525]: E0129 11:01:01.919180 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:02.122071 kubelet[2525]: I0129 11:01:02.122037 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbgg\" (UniqueName: \"kubernetes.io/projected/b522cd2f-d1f0-475c-a2d1-6558ce206dcd-kube-api-access-jfbgg\") pod \"tigera-operator-7d68577dc5-2p85w\" (UID: \"b522cd2f-d1f0-475c-a2d1-6558ce206dcd\") " pod="tigera-operator/tigera-operator-7d68577dc5-2p85w" Jan 29 11:01:02.122527 kubelet[2525]: I0129 11:01:02.122479 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b522cd2f-d1f0-475c-a2d1-6558ce206dcd-var-lib-calico\") pod 
\"tigera-operator-7d68577dc5-2p85w\" (UID: \"b522cd2f-d1f0-475c-a2d1-6558ce206dcd\") " pod="tigera-operator/tigera-operator-7d68577dc5-2p85w" Jan 29 11:01:02.129764 systemd[1]: Created slice kubepods-besteffort-podb522cd2f_d1f0_475c_a2d1_6558ce206dcd.slice - libcontainer container kubepods-besteffort-podb522cd2f_d1f0_475c_a2d1_6558ce206dcd.slice. Jan 29 11:01:02.432876 containerd[1433]: time="2025-01-29T11:01:02.432766345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-2p85w,Uid:b522cd2f-d1f0-475c-a2d1-6558ce206dcd,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:01:02.451889 containerd[1433]: time="2025-01-29T11:01:02.451683643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:02.451889 containerd[1433]: time="2025-01-29T11:01:02.451744034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:02.451889 containerd[1433]: time="2025-01-29T11:01:02.451754912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:02.451889 containerd[1433]: time="2025-01-29T11:01:02.451834820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:02.473304 systemd[1]: Started cri-containerd-295c54264aaaa5205d4d80f4dc056decb4c10222cf6ea53c9924197f5fbc5030.scope - libcontainer container 295c54264aaaa5205d4d80f4dc056decb4c10222cf6ea53c9924197f5fbc5030. 
Jan 29 11:01:02.498306 containerd[1433]: time="2025-01-29T11:01:02.498269998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-2p85w,Uid:b522cd2f-d1f0-475c-a2d1-6558ce206dcd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"295c54264aaaa5205d4d80f4dc056decb4c10222cf6ea53c9924197f5fbc5030\"" Jan 29 11:01:02.507707 containerd[1433]: time="2025-01-29T11:01:02.507666137Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:01:02.576787 kubelet[2525]: E0129 11:01:02.576512 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:02.577100 containerd[1433]: time="2025-01-29T11:01:02.577029789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r2vt,Uid:606c12f4-c56e-442c-a06c-f5567af228df,Namespace:kube-system,Attempt:0,}" Jan 29 11:01:02.595634 containerd[1433]: time="2025-01-29T11:01:02.595541230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:02.595634 containerd[1433]: time="2025-01-29T11:01:02.595610779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:02.595817 containerd[1433]: time="2025-01-29T11:01:02.595628337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:02.595984 containerd[1433]: time="2025-01-29T11:01:02.595937888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:01:02.616622 kubelet[2525]: E0129 11:01:02.616567 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:02.618549 systemd[1]: Started cri-containerd-5921bce6c41383f503f2f6afe4f3bbc8bb1639d18d066a10693cf0814282d48f.scope - libcontainer container 5921bce6c41383f503f2f6afe4f3bbc8bb1639d18d066a10693cf0814282d48f.
Jan 29 11:01:02.640635 containerd[1433]: time="2025-01-29T11:01:02.640600022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r2vt,Uid:606c12f4-c56e-442c-a06c-f5567af228df,Namespace:kube-system,Attempt:0,} returns sandbox id \"5921bce6c41383f503f2f6afe4f3bbc8bb1639d18d066a10693cf0814282d48f\""
Jan 29 11:01:02.648380 kubelet[2525]: E0129 11:01:02.648355 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:02.653385 containerd[1433]: time="2025-01-29T11:01:02.653250255Z" level=info msg="CreateContainer within sandbox \"5921bce6c41383f503f2f6afe4f3bbc8bb1639d18d066a10693cf0814282d48f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:01:02.664018 containerd[1433]: time="2025-01-29T11:01:02.663974867Z" level=info msg="CreateContainer within sandbox \"5921bce6c41383f503f2f6afe4f3bbc8bb1639d18d066a10693cf0814282d48f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"739079d7c97394f5ec8126776924b84dfb766a2404ea0836f2da6b94ad9415de\""
Jan 29 11:01:02.665126 containerd[1433]: time="2025-01-29T11:01:02.664689516Z" level=info msg="StartContainer for \"739079d7c97394f5ec8126776924b84dfb766a2404ea0836f2da6b94ad9415de\""
Jan 29 11:01:02.689249 systemd[1]: Started cri-containerd-739079d7c97394f5ec8126776924b84dfb766a2404ea0836f2da6b94ad9415de.scope - libcontainer container 739079d7c97394f5ec8126776924b84dfb766a2404ea0836f2da6b94ad9415de.
Jan 29 11:01:02.718138 containerd[1433]: time="2025-01-29T11:01:02.718071414Z" level=info msg="StartContainer for \"739079d7c97394f5ec8126776924b84dfb766a2404ea0836f2da6b94ad9415de\" returns successfully"
Jan 29 11:01:02.923755 kubelet[2525]: E0129 11:01:02.923655 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:02.925110 kubelet[2525]: E0129 11:01:02.924610 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:02.945724 kubelet[2525]: I0129 11:01:02.945602 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5r2vt" podStartSLOduration=1.94558375 podStartE2EDuration="1.94558375s" podCreationTimestamp="2025-01-29 11:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:01:02.936186212 +0000 UTC m=+7.122811877" watchObservedRunningTime="2025-01-29 11:01:02.94558375 +0000 UTC m=+7.132209375"
Jan 29 11:01:03.506260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910859920.mount: Deactivated successfully.
Jan 29 11:01:03.753970 containerd[1433]: time="2025-01-29T11:01:03.753916915Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:01:03.754517 containerd[1433]: time="2025-01-29T11:01:03.754472993Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Jan 29 11:01:03.755394 containerd[1433]: time="2025-01-29T11:01:03.755349346Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:01:03.758060 containerd[1433]: time="2025-01-29T11:01:03.757952246Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:01:03.759269 containerd[1433]: time="2025-01-29T11:01:03.759235019Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.251526289s"
Jan 29 11:01:03.759332 containerd[1433]: time="2025-01-29T11:01:03.759269014Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 29 11:01:03.762250 containerd[1433]: time="2025-01-29T11:01:03.762144715Z" level=info msg="CreateContainer within sandbox \"295c54264aaaa5205d4d80f4dc056decb4c10222cf6ea53c9924197f5fbc5030\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 11:01:03.773280 containerd[1433]: time="2025-01-29T11:01:03.773232418Z" level=info msg="CreateContainer within sandbox \"295c54264aaaa5205d4d80f4dc056decb4c10222cf6ea53c9924197f5fbc5030\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e052eeb02ac1e3a8714f3bfe429f01230c85dee76af934ab07b5d0c6a88f5db8\""
Jan 29 11:01:03.773687 containerd[1433]: time="2025-01-29T11:01:03.773666675Z" level=info msg="StartContainer for \"e052eeb02ac1e3a8714f3bfe429f01230c85dee76af934ab07b5d0c6a88f5db8\""
Jan 29 11:01:03.823256 systemd[1]: Started cri-containerd-e052eeb02ac1e3a8714f3bfe429f01230c85dee76af934ab07b5d0c6a88f5db8.scope - libcontainer container e052eeb02ac1e3a8714f3bfe429f01230c85dee76af934ab07b5d0c6a88f5db8.
Jan 29 11:01:03.856571 containerd[1433]: time="2025-01-29T11:01:03.856526034Z" level=info msg="StartContainer for \"e052eeb02ac1e3a8714f3bfe429f01230c85dee76af934ab07b5d0c6a88f5db8\" returns successfully"
Jan 29 11:01:03.938597 kubelet[2525]: I0129 11:01:03.938541 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-2p85w" podStartSLOduration=0.684738844 podStartE2EDuration="1.938520879s" podCreationTimestamp="2025-01-29 11:01:02 +0000 UTC" firstStartedPulling="2025-01-29 11:01:02.507178133 +0000 UTC m=+6.693803758" lastFinishedPulling="2025-01-29 11:01:03.760960168 +0000 UTC m=+7.947585793" observedRunningTime="2025-01-29 11:01:03.938147733 +0000 UTC m=+8.124773358" watchObservedRunningTime="2025-01-29 11:01:03.938520879 +0000 UTC m=+8.125146504"
Jan 29 11:01:06.437890 kubelet[2525]: E0129 11:01:06.437857 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:06.935289 kubelet[2525]: E0129 11:01:06.935255 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:07.062685 update_engine[1422]: I20250129 11:01:07.062103 1422 update_attempter.cc:509] Updating boot flags...
Jan 29 11:01:07.094139 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2920)
Jan 29 11:01:07.148881 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2920)
Jan 29 11:01:07.849650 systemd[1]: Created slice kubepods-besteffort-pod41506287_46b7_4930_9651_a6c1ea32c704.slice - libcontainer container kubepods-besteffort-pod41506287_46b7_4930_9651_a6c1ea32c704.slice.
Jan 29 11:01:07.868298 systemd[1]: Created slice kubepods-besteffort-podbf748902_33b3_4d18_a904_bc9618ba24fa.slice - libcontainer container kubepods-besteffort-podbf748902_33b3_4d18_a904_bc9618ba24fa.slice.
Jan 29 11:01:07.964066 kubelet[2525]: I0129 11:01:07.963125 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41506287-46b7-4930-9651-a6c1ea32c704-tigera-ca-bundle\") pod \"calico-typha-69c9cbd4f4-5hlwg\" (UID: \"41506287-46b7-4930-9651-a6c1ea32c704\") " pod="calico-system/calico-typha-69c9cbd4f4-5hlwg"
Jan 29 11:01:07.964787 kubelet[2525]: I0129 11:01:07.964472 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41506287-46b7-4930-9651-a6c1ea32c704-typha-certs\") pod \"calico-typha-69c9cbd4f4-5hlwg\" (UID: \"41506287-46b7-4930-9651-a6c1ea32c704\") " pod="calico-system/calico-typha-69c9cbd4f4-5hlwg"
Jan 29 11:01:07.964787 kubelet[2525]: I0129 11:01:07.964504 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-var-run-calico\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964787 kubelet[2525]: I0129 11:01:07.964519 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-flexvol-driver-host\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964787 kubelet[2525]: I0129 11:01:07.964540 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-lib-modules\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964787 kubelet[2525]: I0129 11:01:07.964554 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-cni-bin-dir\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964938 kubelet[2525]: I0129 11:01:07.964568 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-cni-log-dir\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964938 kubelet[2525]: I0129 11:01:07.964584 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bf748902-33b3-4d18-a904-bc9618ba24fa-node-certs\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964938 kubelet[2525]: I0129 11:01:07.964599 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-var-lib-calico\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964938 kubelet[2525]: I0129 11:01:07.964614 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf748902-33b3-4d18-a904-bc9618ba24fa-tigera-ca-bundle\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.964938 kubelet[2525]: I0129 11:01:07.964630 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz95t\" (UniqueName: \"kubernetes.io/projected/41506287-46b7-4930-9651-a6c1ea32c704-kube-api-access-xz95t\") pod \"calico-typha-69c9cbd4f4-5hlwg\" (UID: \"41506287-46b7-4930-9651-a6c1ea32c704\") " pod="calico-system/calico-typha-69c9cbd4f4-5hlwg"
Jan 29 11:01:07.965039 kubelet[2525]: I0129 11:01:07.964644 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-cni-net-dir\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.965039 kubelet[2525]: I0129 11:01:07.964659 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkp2s\" (UniqueName: \"kubernetes.io/projected/bf748902-33b3-4d18-a904-bc9618ba24fa-kube-api-access-vkp2s\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.965039 kubelet[2525]: I0129 11:01:07.964676 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-xtables-lock\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.965039 kubelet[2525]: I0129 11:01:07.964694 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bf748902-33b3-4d18-a904-bc9618ba24fa-policysync\") pod \"calico-node-7qwp5\" (UID: \"bf748902-33b3-4d18-a904-bc9618ba24fa\") " pod="calico-system/calico-node-7qwp5"
Jan 29 11:01:07.967147 kubelet[2525]: E0129 11:01:07.967105 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710"
Jan 29 11:01:08.065572 kubelet[2525]: I0129 11:01:08.065522 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7fb477ff-983f-4d5c-ba2e-5632face2710-varrun\") pod \"csi-node-driver-jfpvg\" (UID: \"7fb477ff-983f-4d5c-ba2e-5632face2710\") " pod="calico-system/csi-node-driver-jfpvg"
Jan 29 11:01:08.065572 kubelet[2525]: I0129 11:01:08.065561 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fb477ff-983f-4d5c-ba2e-5632face2710-registration-dir\") pod \"csi-node-driver-jfpvg\" (UID: \"7fb477ff-983f-4d5c-ba2e-5632face2710\") " pod="calico-system/csi-node-driver-jfpvg"
Jan 29 11:01:08.065789 kubelet[2525]: I0129 11:01:08.065676 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fb477ff-983f-4d5c-ba2e-5632face2710-kubelet-dir\") pod \"csi-node-driver-jfpvg\" (UID: \"7fb477ff-983f-4d5c-ba2e-5632face2710\") " pod="calico-system/csi-node-driver-jfpvg"
Jan 29 11:01:08.065789 kubelet[2525]: I0129 11:01:08.065705 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fb477ff-983f-4d5c-ba2e-5632face2710-socket-dir\") pod \"csi-node-driver-jfpvg\" (UID: \"7fb477ff-983f-4d5c-ba2e-5632face2710\") " pod="calico-system/csi-node-driver-jfpvg"
Jan 29 11:01:08.065789 kubelet[2525]: I0129 11:01:08.065751 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc7s\" (UniqueName: \"kubernetes.io/projected/7fb477ff-983f-4d5c-ba2e-5632face2710-kube-api-access-ghc7s\") pod \"csi-node-driver-jfpvg\" (UID: \"7fb477ff-983f-4d5c-ba2e-5632face2710\") " pod="calico-system/csi-node-driver-jfpvg"
Jan 29 11:01:08.066530 kubelet[2525]: E0129 11:01:08.066496 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.066530 kubelet[2525]: W0129 11:01:08.066516 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.066530 kubelet[2525]: E0129 11:01:08.066533 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.066879 kubelet[2525]: E0129 11:01:08.066778 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.066879 kubelet[2525]: W0129 11:01:08.066791 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.066879 kubelet[2525]: E0129 11:01:08.066801 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.067396 kubelet[2525]: E0129 11:01:08.067365 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.067487 kubelet[2525]: W0129 11:01:08.067472 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.068358 kubelet[2525]: E0129 11:01:08.067546 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.068581 kubelet[2525]: E0129 11:01:08.068189 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.068581 kubelet[2525]: W0129 11:01:08.068488 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.068581 kubelet[2525]: E0129 11:01:08.068525 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.071092 kubelet[2525]: E0129 11:01:08.071063 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.071181 kubelet[2525]: W0129 11:01:08.071167 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.071319 kubelet[2525]: E0129 11:01:08.071284 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.071546 kubelet[2525]: E0129 11:01:08.071433 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.071546 kubelet[2525]: W0129 11:01:08.071451 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.071546 kubelet[2525]: E0129 11:01:08.071479 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.071788 kubelet[2525]: E0129 11:01:08.071774 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.071852 kubelet[2525]: W0129 11:01:08.071841 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.072093 kubelet[2525]: E0129 11:01:08.072062 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.072234 kubelet[2525]: W0129 11:01:08.072162 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.072441 kubelet[2525]: E0129 11:01:08.072427 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.072582 kubelet[2525]: W0129 11:01:08.072506 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.072962 kubelet[2525]: E0129 11:01:08.072829 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.072962 kubelet[2525]: W0129 11:01:08.072841 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.073053 kubelet[2525]: E0129 11:01:08.073031 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.073120 kubelet[2525]: E0129 11:01:08.073053 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.073120 kubelet[2525]: E0129 11:01:08.073061 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.073120 kubelet[2525]: E0129 11:01:08.073068 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.073367 kubelet[2525]: E0129 11:01:08.073353 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.073747 kubelet[2525]: W0129 11:01:08.073729 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.074064 kubelet[2525]: E0129 11:01:08.073984 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.074064 kubelet[2525]: W0129 11:01:08.073996 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.074212 kubelet[2525]: E0129 11:01:08.074150 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.074212 kubelet[2525]: E0129 11:01:08.074167 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.074314 kubelet[2525]: E0129 11:01:08.074301 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.074369 kubelet[2525]: W0129 11:01:08.074357 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.074574 kubelet[2525]: E0129 11:01:08.074485 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.074773 kubelet[2525]: E0129 11:01:08.074688 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.074773 kubelet[2525]: W0129 11:01:08.074701 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.074773 kubelet[2525]: E0129 11:01:08.074729 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.074972 kubelet[2525]: E0129 11:01:08.074959 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.075117 kubelet[2525]: W0129 11:01:08.075022 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.075227 kubelet[2525]: E0129 11:01:08.075059 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.075357 kubelet[2525]: E0129 11:01:08.075343 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.075422 kubelet[2525]: W0129 11:01:08.075411 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.075773 kubelet[2525]: E0129 11:01:08.075693 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.075773 kubelet[2525]: W0129 11:01:08.075706 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.075958 kubelet[2525]: E0129 11:01:08.075946 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.076105 kubelet[2525]: W0129 11:01:08.076011 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.076105 kubelet[2525]: E0129 11:01:08.076050 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.076181 kubelet[2525]: E0129 11:01:08.076149 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.076181 kubelet[2525]: E0129 11:01:08.076176 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.076439 kubelet[2525]: E0129 11:01:08.076326 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.076439 kubelet[2525]: W0129 11:01:08.076339 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.076439 kubelet[2525]: E0129 11:01:08.076357 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.076644 kubelet[2525]: E0129 11:01:08.076632 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.076809 kubelet[2525]: W0129 11:01:08.076695 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.076809 kubelet[2525]: E0129 11:01:08.076716 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.077183 kubelet[2525]: E0129 11:01:08.076929 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.077183 kubelet[2525]: W0129 11:01:08.076940 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.077183 kubelet[2525]: E0129 11:01:08.076956 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.077501 kubelet[2525]: E0129 11:01:08.077466 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.077501 kubelet[2525]: W0129 11:01:08.077487 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.077561 kubelet[2525]: E0129 11:01:08.077509 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.077984 kubelet[2525]: E0129 11:01:08.077963 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.078104 kubelet[2525]: W0129 11:01:08.078087 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.078185 kubelet[2525]: E0129 11:01:08.078164 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.078452 kubelet[2525]: E0129 11:01:08.078433 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.078452 kubelet[2525]: W0129 11:01:08.078450 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.078525 kubelet[2525]: E0129 11:01:08.078488 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.078670 kubelet[2525]: E0129 11:01:08.078656 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.078670 kubelet[2525]: W0129 11:01:08.078669 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.078768 kubelet[2525]: E0129 11:01:08.078750 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.078915 kubelet[2525]: E0129 11:01:08.078894 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.078915 kubelet[2525]: W0129 11:01:08.078911 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.079016 kubelet[2525]: E0129 11:01:08.078992 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.082386 kubelet[2525]: E0129 11:01:08.082357 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.082447 kubelet[2525]: W0129 11:01:08.082382 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.082590 kubelet[2525]: E0129 11:01:08.082438 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.082628 kubelet[2525]: E0129 11:01:08.082614 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.082628 kubelet[2525]: W0129 11:01:08.082624 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.082719 kubelet[2525]: E0129 11:01:08.082703 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.082902 kubelet[2525]: E0129 11:01:08.082867 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.082902 kubelet[2525]: W0129 11:01:08.082884 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.083001 kubelet[2525]: E0129 11:01:08.082986 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.083117 kubelet[2525]: E0129 11:01:08.083105 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.083117 kubelet[2525]: W0129 11:01:08.083116 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.083179 kubelet[2525]: E0129 11:01:08.083141 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.083303 kubelet[2525]: E0129 11:01:08.083292 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.083303 kubelet[2525]: W0129 11:01:08.083303 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.083420 kubelet[2525]: E0129 11:01:08.083381 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.083472 kubelet[2525]: E0129 11:01:08.083457 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.083472 kubelet[2525]: W0129 11:01:08.083470 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.083567 kubelet[2525]: E0129 11:01:08.083542 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:01:08.084095 kubelet[2525]: E0129 11:01:08.084060 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:01:08.084144 kubelet[2525]: W0129 11:01:08.084106 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:01:08.084144 kubelet[2525]: E0129 11:01:08.084139 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.084319 kubelet[2525]: E0129 11:01:08.084298 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.084319 kubelet[2525]: W0129 11:01:08.084306 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.084415 kubelet[2525]: E0129 11:01:08.084399 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.084567 kubelet[2525]: E0129 11:01:08.084555 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.084567 kubelet[2525]: W0129 11:01:08.084566 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.084705 kubelet[2525]: E0129 11:01:08.084650 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.084839 kubelet[2525]: E0129 11:01:08.084827 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.084839 kubelet[2525]: W0129 11:01:08.084838 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.084979 kubelet[2525]: E0129 11:01:08.084922 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.085134 kubelet[2525]: E0129 11:01:08.085122 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.085134 kubelet[2525]: W0129 11:01:08.085134 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.085229 kubelet[2525]: E0129 11:01:08.085163 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.085370 kubelet[2525]: E0129 11:01:08.085360 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.085370 kubelet[2525]: W0129 11:01:08.085370 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.085485 kubelet[2525]: E0129 11:01:08.085410 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.085627 kubelet[2525]: E0129 11:01:08.085609 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.085627 kubelet[2525]: W0129 11:01:08.085619 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.085711 kubelet[2525]: E0129 11:01:08.085655 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.085846 kubelet[2525]: E0129 11:01:08.085830 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.085846 kubelet[2525]: W0129 11:01:08.085837 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.085999 kubelet[2525]: E0129 11:01:08.085869 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.085999 kubelet[2525]: E0129 11:01:08.085994 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.086051 kubelet[2525]: W0129 11:01:08.086001 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.086051 kubelet[2525]: E0129 11:01:08.086010 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.087987 kubelet[2525]: E0129 11:01:08.087957 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.087987 kubelet[2525]: W0129 11:01:08.087976 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.087987 kubelet[2525]: E0129 11:01:08.087993 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.088232 kubelet[2525]: E0129 11:01:08.088176 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.088232 kubelet[2525]: W0129 11:01:08.088190 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.088232 kubelet[2525]: E0129 11:01:08.088199 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.090861 kubelet[2525]: E0129 11:01:08.090833 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.090861 kubelet[2525]: W0129 11:01:08.090855 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.090953 kubelet[2525]: E0129 11:01:08.090870 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.160038 kubelet[2525]: E0129 11:01:08.159919 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:08.163743 containerd[1433]: time="2025-01-29T11:01:08.163706726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c9cbd4f4-5hlwg,Uid:41506287-46b7-4930-9651-a6c1ea32c704,Namespace:calico-system,Attempt:0,}" Jan 29 11:01:08.166149 kubelet[2525]: E0129 11:01:08.166125 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.166149 kubelet[2525]: W0129 11:01:08.166146 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.166259 kubelet[2525]: E0129 11:01:08.166163 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.166391 kubelet[2525]: E0129 11:01:08.166378 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.166424 kubelet[2525]: W0129 11:01:08.166392 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.166424 kubelet[2525]: E0129 11:01:08.166408 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.166599 kubelet[2525]: E0129 11:01:08.166588 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.166599 kubelet[2525]: W0129 11:01:08.166598 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.166657 kubelet[2525]: E0129 11:01:08.166611 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.166768 kubelet[2525]: E0129 11:01:08.166758 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.166768 kubelet[2525]: W0129 11:01:08.166768 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.166821 kubelet[2525]: E0129 11:01:08.166780 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.166941 kubelet[2525]: E0129 11:01:08.166931 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.166941 kubelet[2525]: W0129 11:01:08.166941 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.166993 kubelet[2525]: E0129 11:01:08.166953 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.167129 kubelet[2525]: E0129 11:01:08.167118 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.167129 kubelet[2525]: W0129 11:01:08.167129 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.167239 kubelet[2525]: E0129 11:01:08.167142 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.167294 kubelet[2525]: E0129 11:01:08.167282 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.167294 kubelet[2525]: W0129 11:01:08.167291 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.167339 kubelet[2525]: E0129 11:01:08.167304 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.167450 kubelet[2525]: E0129 11:01:08.167438 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.167450 kubelet[2525]: W0129 11:01:08.167449 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.167516 kubelet[2525]: E0129 11:01:08.167477 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.167607 kubelet[2525]: E0129 11:01:08.167594 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.167607 kubelet[2525]: W0129 11:01:08.167604 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.167733 kubelet[2525]: E0129 11:01:08.167643 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.167760 kubelet[2525]: E0129 11:01:08.167746 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.167760 kubelet[2525]: W0129 11:01:08.167753 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.167814 kubelet[2525]: E0129 11:01:08.167793 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.167888 kubelet[2525]: E0129 11:01:08.167876 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.167888 kubelet[2525]: W0129 11:01:08.167884 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.167965 kubelet[2525]: E0129 11:01:08.167918 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.168024 kubelet[2525]: E0129 11:01:08.168015 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.168024 kubelet[2525]: W0129 11:01:08.168022 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.168090 kubelet[2525]: E0129 11:01:08.168041 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.168210 kubelet[2525]: E0129 11:01:08.168198 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.168210 kubelet[2525]: W0129 11:01:08.168208 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.168264 kubelet[2525]: E0129 11:01:08.168221 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.168470 kubelet[2525]: E0129 11:01:08.168436 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.168510 kubelet[2525]: W0129 11:01:08.168472 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.168510 kubelet[2525]: E0129 11:01:08.168490 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.169325 kubelet[2525]: E0129 11:01:08.169308 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.169325 kubelet[2525]: W0129 11:01:08.169325 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.169406 kubelet[2525]: E0129 11:01:08.169343 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.169584 kubelet[2525]: E0129 11:01:08.169571 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.169584 kubelet[2525]: W0129 11:01:08.169583 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.169687 kubelet[2525]: E0129 11:01:08.169650 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.169764 kubelet[2525]: E0129 11:01:08.169750 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.169764 kubelet[2525]: W0129 11:01:08.169761 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.169851 kubelet[2525]: E0129 11:01:08.169832 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.169982 kubelet[2525]: E0129 11:01:08.169969 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.169982 kubelet[2525]: W0129 11:01:08.169980 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.170051 kubelet[2525]: E0129 11:01:08.170026 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.170158 kubelet[2525]: E0129 11:01:08.170146 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.170158 kubelet[2525]: W0129 11:01:08.170158 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.170218 kubelet[2525]: E0129 11:01:08.170179 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.170326 kubelet[2525]: E0129 11:01:08.170315 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.170326 kubelet[2525]: W0129 11:01:08.170324 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.170383 kubelet[2525]: E0129 11:01:08.170348 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.170669 kubelet[2525]: E0129 11:01:08.170652 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:08.170953 kubelet[2525]: E0129 11:01:08.170926 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.170953 kubelet[2525]: W0129 11:01:08.170949 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.171009 kubelet[2525]: E0129 11:01:08.170965 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.171523 kubelet[2525]: E0129 11:01:08.171197 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.171523 kubelet[2525]: W0129 11:01:08.171211 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.171523 kubelet[2525]: E0129 11:01:08.171226 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.171523 kubelet[2525]: E0129 11:01:08.171427 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.171523 kubelet[2525]: W0129 11:01:08.171437 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.171523 kubelet[2525]: E0129 11:01:08.171481 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.171721 containerd[1433]: time="2025-01-29T11:01:08.171234131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qwp5,Uid:bf748902-33b3-4d18-a904-bc9618ba24fa,Namespace:calico-system,Attempt:0,}" Jan 29 11:01:08.171755 kubelet[2525]: E0129 11:01:08.171586 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.171755 kubelet[2525]: W0129 11:01:08.171594 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.171755 kubelet[2525]: E0129 11:01:08.171603 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.178891 kubelet[2525]: E0129 11:01:08.178856 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.178891 kubelet[2525]: W0129 11:01:08.178878 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.178891 kubelet[2525]: E0129 11:01:08.178895 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:01:08.207860 kubelet[2525]: E0129 11:01:08.207744 2525 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:01:08.207860 kubelet[2525]: W0129 11:01:08.207785 2525 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:01:08.207860 kubelet[2525]: E0129 11:01:08.207807 2525 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:01:08.229628 containerd[1433]: time="2025-01-29T11:01:08.229108780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:08.229628 containerd[1433]: time="2025-01-29T11:01:08.229170814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:08.229628 containerd[1433]: time="2025-01-29T11:01:08.229186292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:08.229628 containerd[1433]: time="2025-01-29T11:01:08.229281122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:08.234932 containerd[1433]: time="2025-01-29T11:01:08.234804699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:08.234932 containerd[1433]: time="2025-01-29T11:01:08.234855414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:08.234932 containerd[1433]: time="2025-01-29T11:01:08.234866613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:08.235136 containerd[1433]: time="2025-01-29T11:01:08.234996279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:08.249236 systemd[1]: Started cri-containerd-2163ad498e4445a7671ab6ad057ec3b0e6d2fc36e7331cf970c1079e0402ad21.scope - libcontainer container 2163ad498e4445a7671ab6ad057ec3b0e6d2fc36e7331cf970c1079e0402ad21. Jan 29 11:01:08.254782 systemd[1]: Started cri-containerd-6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3.scope - libcontainer container 6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3. Jan 29 11:01:08.286508 containerd[1433]: time="2025-01-29T11:01:08.286468844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c9cbd4f4-5hlwg,Uid:41506287-46b7-4930-9651-a6c1ea32c704,Namespace:calico-system,Attempt:0,} returns sandbox id \"2163ad498e4445a7671ab6ad057ec3b0e6d2fc36e7331cf970c1079e0402ad21\"" Jan 29 11:01:08.287589 containerd[1433]: time="2025-01-29T11:01:08.287518013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qwp5,Uid:bf748902-33b3-4d18-a904-bc9618ba24fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\"" Jan 29 11:01:08.295101 kubelet[2525]: E0129 11:01:08.294820 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:08.296729 containerd[1433]: time="2025-01-29T11:01:08.296696204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:01:08.297567 
kubelet[2525]: E0129 11:01:08.297535 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:09.302353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740756948.mount: Deactivated successfully. Jan 29 11:01:09.360024 containerd[1433]: time="2025-01-29T11:01:09.359971987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:09.360884 containerd[1433]: time="2025-01-29T11:01:09.360813383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 29 11:01:09.361191 containerd[1433]: time="2025-01-29T11:01:09.361163149Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:09.363128 containerd[1433]: time="2025-01-29T11:01:09.363096077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:09.364048 containerd[1433]: time="2025-01-29T11:01:09.364005707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.067273427s" Jan 29 11:01:09.364100 containerd[1433]: time="2025-01-29T11:01:09.364047103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference 
\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 11:01:09.365519 containerd[1433]: time="2025-01-29T11:01:09.365297339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:01:09.372929 containerd[1433]: time="2025-01-29T11:01:09.372876829Z" level=info msg="CreateContainer within sandbox \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:01:09.392555 containerd[1433]: time="2025-01-29T11:01:09.392499087Z" level=info msg="CreateContainer within sandbox \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48\"" Jan 29 11:01:09.394115 containerd[1433]: time="2025-01-29T11:01:09.393187659Z" level=info msg="StartContainer for \"ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48\"" Jan 29 11:01:09.428301 systemd[1]: Started cri-containerd-ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48.scope - libcontainer container ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48. Jan 29 11:01:09.457817 containerd[1433]: time="2025-01-29T11:01:09.457775065Z" level=info msg="StartContainer for \"ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48\" returns successfully" Jan 29 11:01:09.493448 systemd[1]: cri-containerd-ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48.scope: Deactivated successfully. 
Jan 29 11:01:09.537387 containerd[1433]: time="2025-01-29T11:01:09.532062472Z" level=info msg="shim disconnected" id=ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48 namespace=k8s.io Jan 29 11:01:09.537387 containerd[1433]: time="2025-01-29T11:01:09.537380586Z" level=warning msg="cleaning up after shim disconnected" id=ccc0fb3bdba0aabc9baea151a430296455b4952aff00589082ab65c2556e5b48 namespace=k8s.io Jan 29 11:01:09.537387 containerd[1433]: time="2025-01-29T11:01:09.537393424Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:01:09.892409 kubelet[2525]: E0129 11:01:09.892364 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:09.949471 kubelet[2525]: E0129 11:01:09.949279 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:11.391190 containerd[1433]: time="2025-01-29T11:01:11.391145673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:11.392489 containerd[1433]: time="2025-01-29T11:01:11.392447920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 29 11:01:11.393500 containerd[1433]: time="2025-01-29T11:01:11.393456712Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:11.396545 containerd[1433]: time="2025-01-29T11:01:11.396216992Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:11.397230 containerd[1433]: time="2025-01-29T11:01:11.397127432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.031786177s" Jan 29 11:01:11.397230 containerd[1433]: time="2025-01-29T11:01:11.397182108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 11:01:11.400457 containerd[1433]: time="2025-01-29T11:01:11.400420066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:01:11.412466 containerd[1433]: time="2025-01-29T11:01:11.412352948Z" level=info msg="CreateContainer within sandbox \"2163ad498e4445a7671ab6ad057ec3b0e6d2fc36e7331cf970c1079e0402ad21\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:01:11.423285 containerd[1433]: time="2025-01-29T11:01:11.423254159Z" level=info msg="CreateContainer within sandbox \"2163ad498e4445a7671ab6ad057ec3b0e6d2fc36e7331cf970c1079e0402ad21\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1908a12d513dbefbdb7fd34f508d18d375c9c23db16b31525c04927b8dfdb42b\"" Jan 29 11:01:11.424761 containerd[1433]: time="2025-01-29T11:01:11.424424538Z" level=info msg="StartContainer for \"1908a12d513dbefbdb7fd34f508d18d375c9c23db16b31525c04927b8dfdb42b\"" Jan 29 11:01:11.457642 systemd[1]: Started cri-containerd-1908a12d513dbefbdb7fd34f508d18d375c9c23db16b31525c04927b8dfdb42b.scope - libcontainer container 
1908a12d513dbefbdb7fd34f508d18d375c9c23db16b31525c04927b8dfdb42b. Jan 29 11:01:11.493376 containerd[1433]: time="2025-01-29T11:01:11.493329023Z" level=info msg="StartContainer for \"1908a12d513dbefbdb7fd34f508d18d375c9c23db16b31525c04927b8dfdb42b\" returns successfully" Jan 29 11:01:11.892522 kubelet[2525]: E0129 11:01:11.892191 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:11.963769 kubelet[2525]: E0129 11:01:11.963699 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:12.967521 kubelet[2525]: I0129 11:01:12.967480 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:01:12.968072 kubelet[2525]: E0129 11:01:12.967906 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:13.893429 kubelet[2525]: E0129 11:01:13.893390 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:15.530267 containerd[1433]: time="2025-01-29T11:01:15.530214945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:15.531311 containerd[1433]: time="2025-01-29T11:01:15.531055528Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 11:01:15.531952 containerd[1433]: time="2025-01-29T11:01:15.531861514Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:15.534174 containerd[1433]: time="2025-01-29T11:01:15.534138881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:15.534857 containerd[1433]: time="2025-01-29T11:01:15.534822715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.13428002s" Jan 29 11:01:15.534906 containerd[1433]: time="2025-01-29T11:01:15.534856033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 11:01:15.537239 containerd[1433]: time="2025-01-29T11:01:15.537198475Z" level=info msg="CreateContainer within sandbox \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:01:15.549738 containerd[1433]: time="2025-01-29T11:01:15.549646799Z" level=info msg="CreateContainer within sandbox \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109\"" Jan 29 11:01:15.550197 containerd[1433]: time="2025-01-29T11:01:15.550170084Z" level=info msg="StartContainer 
for \"4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109\"" Jan 29 11:01:15.580283 systemd[1]: Started cri-containerd-4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109.scope - libcontainer container 4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109. Jan 29 11:01:15.602726 containerd[1433]: time="2025-01-29T11:01:15.602648077Z" level=info msg="StartContainer for \"4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109\" returns successfully" Jan 29 11:01:15.898457 kubelet[2525]: E0129 11:01:15.897097 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:15.972235 kubelet[2525]: E0129 11:01:15.972114 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:15.999781 kubelet[2525]: I0129 11:01:15.999432 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69c9cbd4f4-5hlwg" podStartSLOduration=5.899567139 podStartE2EDuration="8.999416213s" podCreationTimestamp="2025-01-29 11:01:07 +0000 UTC" firstStartedPulling="2025-01-29 11:01:08.298616002 +0000 UTC m=+12.485241587" lastFinishedPulling="2025-01-29 11:01:11.398465036 +0000 UTC m=+15.585090661" observedRunningTime="2025-01-29 11:01:11.980148111 +0000 UTC m=+16.166773736" watchObservedRunningTime="2025-01-29 11:01:15.999416213 +0000 UTC m=+20.186041838" Jan 29 11:01:16.173044 systemd[1]: cri-containerd-4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109.scope: Deactivated successfully. 
Jan 29 11:01:16.195687 kubelet[2525]: I0129 11:01:16.194773 2525 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 11:01:16.196203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109-rootfs.mount: Deactivated successfully. Jan 29 11:01:16.290451 containerd[1433]: time="2025-01-29T11:01:16.290114216Z" level=info msg="shim disconnected" id=4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109 namespace=k8s.io Jan 29 11:01:16.290451 containerd[1433]: time="2025-01-29T11:01:16.290172532Z" level=warning msg="cleaning up after shim disconnected" id=4fd7b59d72b7318ef7a90729783b9fc1bf25c78590265603d0643bab35da4109 namespace=k8s.io Jan 29 11:01:16.297167 containerd[1433]: time="2025-01-29T11:01:16.297113055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:01:16.314589 systemd[1]: Created slice kubepods-burstable-podf3a930f1_28d8_4b84_b302_2ee738d83501.slice - libcontainer container kubepods-burstable-podf3a930f1_28d8_4b84_b302_2ee738d83501.slice. Jan 29 11:01:16.328519 systemd[1]: Created slice kubepods-burstable-podfe5a11eb_34f8_4ac2_b56a_f4cc11926f6b.slice - libcontainer container kubepods-burstable-podfe5a11eb_34f8_4ac2_b56a_f4cc11926f6b.slice. Jan 29 11:01:16.335131 systemd[1]: Created slice kubepods-besteffort-pode2a43c5c_ca07_4add_b171_f2255f364fd9.slice - libcontainer container kubepods-besteffort-pode2a43c5c_ca07_4add_b171_f2255f364fd9.slice. Jan 29 11:01:16.343400 systemd[1]: Created slice kubepods-besteffort-pod5810df49_717b_4ac3_90ec_8888521cc6d3.slice - libcontainer container kubepods-besteffort-pod5810df49_717b_4ac3_90ec_8888521cc6d3.slice. Jan 29 11:01:16.350410 systemd[1]: Created slice kubepods-besteffort-podbc8d959a_7f2e_4e77_bbe7_a5e311c45518.slice - libcontainer container kubepods-besteffort-podbc8d959a_7f2e_4e77_bbe7_a5e311c45518.slice. 
Jan 29 11:01:16.437023 kubelet[2525]: I0129 11:01:16.436902 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmz9\" (UniqueName: \"kubernetes.io/projected/fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b-kube-api-access-jwmz9\") pod \"coredns-668d6bf9bc-t46bf\" (UID: \"fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b\") " pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:16.437242 kubelet[2525]: I0129 11:01:16.437215 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5810df49-717b-4ac3-90ec-8888521cc6d3-calico-apiserver-certs\") pod \"calico-apiserver-548cc7dcc6-5f4ww\" (UID: \"5810df49-717b-4ac3-90ec-8888521cc6d3\") " pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:16.437334 kubelet[2525]: I0129 11:01:16.437321 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a43c5c-ca07-4add-b171-f2255f364fd9-tigera-ca-bundle\") pod \"calico-kube-controllers-5d55b65567-pc9f6\" (UID: \"e2a43c5c-ca07-4add-b171-f2255f364fd9\") " pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:16.437413 kubelet[2525]: I0129 11:01:16.437400 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3a930f1-28d8-4b84-b302-2ee738d83501-config-volume\") pod \"coredns-668d6bf9bc-6rzzz\" (UID: \"f3a930f1-28d8-4b84-b302-2ee738d83501\") " pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:16.437498 kubelet[2525]: I0129 11:01:16.437475 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtgh\" (UniqueName: \"kubernetes.io/projected/f3a930f1-28d8-4b84-b302-2ee738d83501-kube-api-access-phtgh\") pod 
\"coredns-668d6bf9bc-6rzzz\" (UID: \"f3a930f1-28d8-4b84-b302-2ee738d83501\") " pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:16.437687 kubelet[2525]: I0129 11:01:16.437567 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbqj\" (UniqueName: \"kubernetes.io/projected/bc8d959a-7f2e-4e77-bbe7-a5e311c45518-kube-api-access-cqbqj\") pod \"calico-apiserver-548cc7dcc6-ddf92\" (UID: \"bc8d959a-7f2e-4e77-bbe7-a5e311c45518\") " pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:16.437687 kubelet[2525]: I0129 11:01:16.437594 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc8d959a-7f2e-4e77-bbe7-a5e311c45518-calico-apiserver-certs\") pod \"calico-apiserver-548cc7dcc6-ddf92\" (UID: \"bc8d959a-7f2e-4e77-bbe7-a5e311c45518\") " pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:16.437687 kubelet[2525]: I0129 11:01:16.437613 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp49m\" (UniqueName: \"kubernetes.io/projected/5810df49-717b-4ac3-90ec-8888521cc6d3-kube-api-access-mp49m\") pod \"calico-apiserver-548cc7dcc6-5f4ww\" (UID: \"5810df49-717b-4ac3-90ec-8888521cc6d3\") " pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:16.437687 kubelet[2525]: I0129 11:01:16.437631 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b-config-volume\") pod \"coredns-668d6bf9bc-t46bf\" (UID: \"fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b\") " pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:16.437687 kubelet[2525]: I0129 11:01:16.437655 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4m5m7\" (UniqueName: \"kubernetes.io/projected/e2a43c5c-ca07-4add-b171-f2255f364fd9-kube-api-access-4m5m7\") pod \"calico-kube-controllers-5d55b65567-pc9f6\" (UID: \"e2a43c5c-ca07-4add-b171-f2255f364fd9\") " pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:16.617632 kubelet[2525]: E0129 11:01:16.617599 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:16.619369 containerd[1433]: time="2025-01-29T11:01:16.619287397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:0,}" Jan 29 11:01:16.634025 kubelet[2525]: E0129 11:01:16.633997 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:16.634771 containerd[1433]: time="2025-01-29T11:01:16.634469881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:0,}" Jan 29 11:01:16.639599 containerd[1433]: time="2025-01-29T11:01:16.639402050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:0,}" Jan 29 11:01:16.654138 containerd[1433]: time="2025-01-29T11:01:16.651328899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:01:16.658390 containerd[1433]: time="2025-01-29T11:01:16.658167708Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:01:16.998914 kubelet[2525]: E0129 11:01:16.997752 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:17.000807 containerd[1433]: time="2025-01-29T11:01:17.000719129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:01:17.038503 containerd[1433]: time="2025-01-29T11:01:17.038441861Z" level=error msg="Failed to destroy network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.038832 containerd[1433]: time="2025-01-29T11:01:17.038525616Z" level=error msg="Failed to destroy network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.043841 containerd[1433]: time="2025-01-29T11:01:17.043766866Z" level=error msg="encountered an error cleaning up failed sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.045534 containerd[1433]: time="2025-01-29T11:01:17.045439767Z" level=error msg="Failed to destroy network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.046053 containerd[1433]: time="2025-01-29T11:01:17.045910900Z" level=error msg="encountered an error cleaning up failed sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.046053 containerd[1433]: time="2025-01-29T11:01:17.046000254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.046310 containerd[1433]: time="2025-01-29T11:01:17.046150165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.046310 containerd[1433]: time="2025-01-29T11:01:17.046184643Z" level=error msg="Failed to destroy network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.046310 containerd[1433]: time="2025-01-29T11:01:17.046149725Z" level=error msg="encountered an error cleaning up failed sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.046310 containerd[1433]: time="2025-01-29T11:01:17.046254919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.047556 containerd[1433]: time="2025-01-29T11:01:17.046649736Z" level=error msg="encountered an error cleaning up failed sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.047556 containerd[1433]: time="2025-01-29T11:01:17.046712492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.047808 containerd[1433]: time="2025-01-29T11:01:17.047773470Z" level=error msg="Failed to destroy network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.048233 containerd[1433]: time="2025-01-29T11:01:17.048192445Z" level=error msg="encountered an error cleaning up failed sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.048371 containerd[1433]: time="2025-01-29T11:01:17.048349396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.048835 kubelet[2525]: E0129 11:01:17.048795 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.048909 kubelet[2525]: E0129 11:01:17.048848 2525 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.048909 kubelet[2525]: E0129 11:01:17.048881 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.048966 kubelet[2525]: E0129 11:01:17.048803 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.049961 kubelet[2525]: E0129 11:01:17.049936 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:17.050026 kubelet[2525]: E0129 11:01:17.049970 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:17.050026 kubelet[2525]: E0129 11:01:17.048804 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.050133 kubelet[2525]: E0129 11:01:17.050042 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:17.050133 kubelet[2525]: E0129 11:01:17.050058 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:17.050215 kubelet[2525]: E0129 11:01:17.050138 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" podUID="bc8d959a-7f2e-4e77-bbe7-a5e311c45518" Jan 29 11:01:17.050215 kubelet[2525]: E0129 11:01:17.050011 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t46bf" podUID="fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b" Jan 29 11:01:17.050215 kubelet[2525]: E0129 11:01:17.049936 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:17.050389 kubelet[2525]: E0129 11:01:17.050186 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:17.050389 kubelet[2525]: E0129 11:01:17.050221 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" podUID="e2a43c5c-ca07-4add-b171-f2255f364fd9" Jan 29 11:01:17.051927 kubelet[2525]: E0129 11:01:17.051875 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:17.051997 kubelet[2525]: E0129 11:01:17.051929 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:17.051997 kubelet[2525]: E0129 11:01:17.051981 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rzzz" podUID="f3a930f1-28d8-4b84-b302-2ee738d83501" Jan 29 11:01:17.052583 kubelet[2525]: E0129 11:01:17.052545 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:17.052639 kubelet[2525]: E0129 11:01:17.052592 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 
11:01:17.052664 kubelet[2525]: E0129 11:01:17.052632 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" podUID="5810df49-717b-4ac3-90ec-8888521cc6d3" Jan 29 11:01:17.547233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8-shm.mount: Deactivated successfully. Jan 29 11:01:17.898567 systemd[1]: Created slice kubepods-besteffort-pod7fb477ff_983f_4d5c_ba2e_5632face2710.slice - libcontainer container kubepods-besteffort-pod7fb477ff_983f_4d5c_ba2e_5632face2710.slice. 
Jan 29 11:01:17.900687 containerd[1433]: time="2025-01-29T11:01:17.900630856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:0,}" Jan 29 11:01:17.976415 containerd[1433]: time="2025-01-29T11:01:17.976321226Z" level=error msg="Failed to destroy network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.978205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9-shm.mount: Deactivated successfully. Jan 29 11:01:17.979688 containerd[1433]: time="2025-01-29T11:01:17.978583692Z" level=error msg="encountered an error cleaning up failed sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.979688 containerd[1433]: time="2025-01-29T11:01:17.978928192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.980696 kubelet[2525]: E0129 11:01:17.980248 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:17.980696 kubelet[2525]: E0129 11:01:17.980319 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:17.980696 kubelet[2525]: E0129 11:01:17.980340 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:17.980886 kubelet[2525]: E0129 11:01:17.980381 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jfpvg" 
podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:18.000525 kubelet[2525]: I0129 11:01:18.000494 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc" Jan 29 11:01:18.001433 containerd[1433]: time="2025-01-29T11:01:18.001396666Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\"" Jan 29 11:01:18.003103 kubelet[2525]: I0129 11:01:18.003021 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf" Jan 29 11:01:18.003851 containerd[1433]: time="2025-01-29T11:01:18.003772614Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\"" Jan 29 11:01:18.004025 containerd[1433]: time="2025-01-29T11:01:18.003999602Z" level=info msg="Ensure that sandbox 3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf in task-service has been cleanup successfully" Jan 29 11:01:18.006225 kubelet[2525]: I0129 11:01:18.006194 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8" Jan 29 11:01:18.007257 systemd[1]: run-netns-cni\x2d08793b5a\x2d5a54\x2d2654\x2dae0e\x2d7256b164f7a4.mount: Deactivated successfully. 
Jan 29 11:01:18.008489 containerd[1433]: time="2025-01-29T11:01:18.007531246Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:18.008489 containerd[1433]: time="2025-01-29T11:01:18.007706316Z" level=info msg="Ensure that sandbox aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8 in task-service has been cleanup successfully" Jan 29 11:01:18.008489 containerd[1433]: time="2025-01-29T11:01:18.008427716Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully" Jan 29 11:01:18.008489 containerd[1433]: time="2025-01-29T11:01:18.008451755Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully" Jan 29 11:01:18.008846 kubelet[2525]: E0129 11:01:18.008795 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:18.009420 containerd[1433]: time="2025-01-29T11:01:18.009399463Z" level=info msg="TearDown network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 29 11:01:18.009495 kubelet[2525]: I0129 11:01:18.009479 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9" Jan 29 11:01:18.009598 containerd[1433]: time="2025-01-29T11:01:18.009580013Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:18.010411 containerd[1433]: time="2025-01-29T11:01:18.009488058Z" level=info msg="Ensure that sandbox 6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc in task-service has been cleanup successfully" Jan 29 11:01:18.010411 containerd[1433]: time="2025-01-29T11:01:18.010394047Z" 
level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:18.011007 containerd[1433]: time="2025-01-29T11:01:18.010537720Z" level=info msg="Ensure that sandbox 5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9 in task-service has been cleanup successfully" Jan 29 11:01:18.011007 containerd[1433]: time="2025-01-29T11:01:18.009405022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:1,}" Jan 29 11:01:18.011129 kubelet[2525]: E0129 11:01:18.010872 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:18.011196 containerd[1433]: time="2025-01-29T11:01:18.011165405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:1,}" Jan 29 11:01:18.011261 containerd[1433]: time="2025-01-29T11:01:18.011227681Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:18.013213 containerd[1433]: time="2025-01-29T11:01:18.012292422Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:18.013213 containerd[1433]: time="2025-01-29T11:01:18.013159454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:1,}" Jan 29 11:01:18.015653 kubelet[2525]: I0129 11:01:18.015615 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca" Jan 29 11:01:18.018673 kubelet[2525]: I0129 11:01:18.018645 2525 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9" Jan 29 11:01:18.019082 containerd[1433]: time="2025-01-29T11:01:18.019040929Z" level=info msg="TearDown network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully" Jan 29 11:01:18.019227 containerd[1433]: time="2025-01-29T11:01:18.019213159Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns successfully" Jan 29 11:01:18.019272 containerd[1433]: time="2025-01-29T11:01:18.019150283Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\"" Jan 29 11:01:18.019840 containerd[1433]: time="2025-01-29T11:01:18.019355951Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\"" Jan 29 11:01:18.019920 containerd[1433]: time="2025-01-29T11:01:18.019772848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:01:18.020289 containerd[1433]: time="2025-01-29T11:01:18.020213024Z" level=info msg="Ensure that sandbox 0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca in task-service has been cleanup successfully" Jan 29 11:01:18.020319 containerd[1433]: time="2025-01-29T11:01:18.020290020Z" level=info msg="Ensure that sandbox 14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9 in task-service has been cleanup successfully" Jan 29 11:01:18.021561 containerd[1433]: time="2025-01-29T11:01:18.021523551Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully" Jan 29 11:01:18.021561 containerd[1433]: time="2025-01-29T11:01:18.021550870Z" level=info msg="StopPodSandbox for 
\"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully" Jan 29 11:01:18.024324 containerd[1433]: time="2025-01-29T11:01:18.023832703Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully" Jan 29 11:01:18.024324 containerd[1433]: time="2025-01-29T11:01:18.023869701Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully" Jan 29 11:01:18.025350 containerd[1433]: time="2025-01-29T11:01:18.025325621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:1,}" Jan 29 11:01:18.029849 containerd[1433]: time="2025-01-29T11:01:18.029809532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:01:18.162992 containerd[1433]: time="2025-01-29T11:01:18.162791329Z" level=error msg="Failed to destroy network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.163226 containerd[1433]: time="2025-01-29T11:01:18.163197986Z" level=error msg="Failed to destroy network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.163610 containerd[1433]: time="2025-01-29T11:01:18.163424054Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.163610 containerd[1433]: time="2025-01-29T11:01:18.163559446Z" level=error msg="encountered an error cleaning up failed sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.165504 containerd[1433]: time="2025-01-29T11:01:18.165334588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.166209 kubelet[2525]: E0129 11:01:18.165967 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.166274 kubelet[2525]: E0129 11:01:18.166226 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:18.166274 kubelet[2525]: E0129 11:01:18.166252 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:18.166318 kubelet[2525]: E0129 11:01:18.166289 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:18.166615 containerd[1433]: time="2025-01-29T11:01:18.166578399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.167590 kubelet[2525]: E0129 11:01:18.167543 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.167676 kubelet[2525]: E0129 11:01:18.167599 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:18.167703 kubelet[2525]: E0129 11:01:18.167677 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:18.167754 kubelet[2525]: E0129 11:01:18.167712 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t46bf" podUID="fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b" Jan 29 11:01:18.184025 containerd[1433]: time="2025-01-29T11:01:18.183837363Z" level=error msg="Failed to destroy network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.184860 containerd[1433]: time="2025-01-29T11:01:18.184810950Z" level=error msg="encountered an error cleaning up failed sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.184917 containerd[1433]: time="2025-01-29T11:01:18.184889825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.185267 kubelet[2525]: E0129 11:01:18.185221 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.185330 kubelet[2525]: E0129 11:01:18.185284 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:18.185330 kubelet[2525]: E0129 11:01:18.185304 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:18.185395 kubelet[2525]: E0129 11:01:18.185341 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rzzz" podUID="f3a930f1-28d8-4b84-b302-2ee738d83501" Jan 29 11:01:18.188661 containerd[1433]: 
time="2025-01-29T11:01:18.188542903Z" level=error msg="Failed to destroy network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.190349 containerd[1433]: time="2025-01-29T11:01:18.190267807Z" level=error msg="encountered an error cleaning up failed sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.190460 containerd[1433]: time="2025-01-29T11:01:18.190371442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.190997 kubelet[2525]: E0129 11:01:18.190672 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.190997 kubelet[2525]: E0129 11:01:18.190735 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:18.190997 kubelet[2525]: E0129 11:01:18.190755 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:18.191199 kubelet[2525]: E0129 11:01:18.190796 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" podUID="e2a43c5c-ca07-4add-b171-f2255f364fd9" Jan 29 11:01:18.191477 containerd[1433]: time="2025-01-29T11:01:18.191416864Z" level=error msg="Failed to destroy network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.192383 containerd[1433]: time="2025-01-29T11:01:18.192350612Z" level=error msg="encountered an error cleaning up failed sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.192430 containerd[1433]: time="2025-01-29T11:01:18.192406649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.192626 kubelet[2525]: E0129 11:01:18.192593 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.192932 kubelet[2525]: E0129 11:01:18.192894 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:18.192982 kubelet[2525]: E0129 11:01:18.192939 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:18.193037 kubelet[2525]: E0129 11:01:18.193013 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" podUID="bc8d959a-7f2e-4e77-bbe7-a5e311c45518" Jan 29 11:01:18.194262 containerd[1433]: time="2025-01-29T11:01:18.194230308Z" level=error msg="Failed to destroy network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.194563 containerd[1433]: time="2025-01-29T11:01:18.194538971Z" level=error msg="encountered an error cleaning up failed sandbox 
\"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.194730 containerd[1433]: time="2025-01-29T11:01:18.194688603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.195272 kubelet[2525]: E0129 11:01:18.195241 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:18.195309 kubelet[2525]: E0129 11:01:18.195290 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:18.195336 kubelet[2525]: E0129 11:01:18.195309 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:18.195371 kubelet[2525]: E0129 11:01:18.195344 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" podUID="5810df49-717b-4ac3-90ec-8888521cc6d3" Jan 29 11:01:18.547615 systemd[1]: run-netns-cni\x2d7b459cb5\x2d1036\x2d0160\x2da500\x2dec97f06b567a.mount: Deactivated successfully. Jan 29 11:01:18.547702 systemd[1]: run-netns-cni\x2d929519d9\x2dff98\x2d8627\x2d50c1\x2d7b8097094acb.mount: Deactivated successfully. Jan 29 11:01:18.547750 systemd[1]: run-netns-cni\x2dd83f3055\x2de7fb\x2d584d\x2d1ae5\x2dd551a0f51576.mount: Deactivated successfully. Jan 29 11:01:18.547803 systemd[1]: run-netns-cni\x2d2f4eaae2\x2d97c9\x2de1e8\x2d1c86\x2dd0fb8a6367fc.mount: Deactivated successfully. Jan 29 11:01:18.547849 systemd[1]: run-netns-cni\x2d5e44f97b\x2d9a91\x2d6918\x2d890f\x2d65d994ab7447.mount: Deactivated successfully. 
Jan 29 11:01:19.021644 kubelet[2525]: I0129 11:01:19.021605 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527" Jan 29 11:01:19.022406 containerd[1433]: time="2025-01-29T11:01:19.022363806Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\"" Jan 29 11:01:19.022636 containerd[1433]: time="2025-01-29T11:01:19.022537157Z" level=info msg="Ensure that sandbox 6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527 in task-service has been cleanup successfully" Jan 29 11:01:19.023448 containerd[1433]: time="2025-01-29T11:01:19.022722667Z" level=info msg="TearDown network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" successfully" Jan 29 11:01:19.023448 containerd[1433]: time="2025-01-29T11:01:19.022739986Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" returns successfully" Jan 29 11:01:19.023448 containerd[1433]: time="2025-01-29T11:01:19.023333116Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\"" Jan 29 11:01:19.023448 containerd[1433]: time="2025-01-29T11:01:19.023415751Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully" Jan 29 11:01:19.023448 containerd[1433]: time="2025-01-29T11:01:19.023429510Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully" Jan 29 11:01:19.023961 containerd[1433]: time="2025-01-29T11:01:19.023925525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:2,}" Jan 29 11:01:19.024713 systemd[1]: run-netns-cni\x2d4d17bad4\x2df629\x2dbd5a\x2d19e6\x2dc5fdf52e7521.mount: 
Deactivated successfully. Jan 29 11:01:19.026326 kubelet[2525]: I0129 11:01:19.025860 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3" Jan 29 11:01:19.026453 containerd[1433]: time="2025-01-29T11:01:19.026421395Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:19.026677 containerd[1433]: time="2025-01-29T11:01:19.026643824Z" level=info msg="Ensure that sandbox 2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3 in task-service has been cleanup successfully" Jan 29 11:01:19.028115 containerd[1433]: time="2025-01-29T11:01:19.026944688Z" level=info msg="TearDown network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" successfully" Jan 29 11:01:19.028115 containerd[1433]: time="2025-01-29T11:01:19.026965327Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" returns successfully" Jan 29 11:01:19.029703 systemd[1]: run-netns-cni\x2da153341f\x2dc1f8\x2d3631\x2d0a0f\x2d255635c490ad.mount: Deactivated successfully. 
Jan 29 11:01:19.031119 containerd[1433]: time="2025-01-29T11:01:19.030861885Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:19.031119 containerd[1433]: time="2025-01-29T11:01:19.030951840Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:19.031119 containerd[1433]: time="2025-01-29T11:01:19.030961000Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:19.031304 kubelet[2525]: I0129 11:01:19.031265 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825" Jan 29 11:01:19.031827 containerd[1433]: time="2025-01-29T11:01:19.031797036Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\"" Jan 29 11:01:19.031939 containerd[1433]: time="2025-01-29T11:01:19.031916790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:2,}" Jan 29 11:01:19.033763 kubelet[2525]: I0129 11:01:19.033462 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b" Jan 29 11:01:19.034972 containerd[1433]: time="2025-01-29T11:01:19.034317505Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\"" Jan 29 11:01:19.034972 containerd[1433]: time="2025-01-29T11:01:19.034752003Z" level=info msg="Ensure that sandbox b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b in task-service has been cleanup successfully" Jan 29 11:01:19.035207 containerd[1433]: time="2025-01-29T11:01:19.035125143Z" level=info msg="TearDown network for sandbox 
\"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" successfully" Jan 29 11:01:19.035245 containerd[1433]: time="2025-01-29T11:01:19.035206659Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" returns successfully" Jan 29 11:01:19.035671 containerd[1433]: time="2025-01-29T11:01:19.035600599Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\"" Jan 29 11:01:19.035735 containerd[1433]: time="2025-01-29T11:01:19.035686354Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully" Jan 29 11:01:19.035735 containerd[1433]: time="2025-01-29T11:01:19.035697514Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully" Jan 29 11:01:19.036957 systemd[1]: run-netns-cni\x2d81039ee8\x2d8cd4\x2d629b\x2de169\x2dadd4473e21e6.mount: Deactivated successfully. 
Jan 29 11:01:19.037407 kubelet[2525]: E0129 11:01:19.037384 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:19.037985 containerd[1433]: time="2025-01-29T11:01:19.037922438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:2,}" Jan 29 11:01:19.038068 kubelet[2525]: I0129 11:01:19.038045 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe" Jan 29 11:01:19.038462 containerd[1433]: time="2025-01-29T11:01:19.038439651Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:19.038707 containerd[1433]: time="2025-01-29T11:01:19.038588004Z" level=info msg="Ensure that sandbox b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe in task-service has been cleanup successfully" Jan 29 11:01:19.038783 containerd[1433]: time="2025-01-29T11:01:19.038762755Z" level=info msg="TearDown network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" successfully" Jan 29 11:01:19.038812 containerd[1433]: time="2025-01-29T11:01:19.038781314Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" returns successfully" Jan 29 11:01:19.039863 kubelet[2525]: I0129 11:01:19.039839 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d" Jan 29 11:01:19.040272 systemd[1]: run-netns-cni\x2d22d61647\x2d82cb\x2df759\x2d69c5\x2d7423c7978dab.mount: Deactivated successfully. 
Jan 29 11:01:19.040387 containerd[1433]: time="2025-01-29T11:01:19.040358592Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\"" Jan 29 11:01:19.040905 containerd[1433]: time="2025-01-29T11:01:19.040485545Z" level=info msg="Ensure that sandbox 01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d in task-service has been cleanup successfully" Jan 29 11:01:19.040905 containerd[1433]: time="2025-01-29T11:01:19.040642777Z" level=info msg="TearDown network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" successfully" Jan 29 11:01:19.040905 containerd[1433]: time="2025-01-29T11:01:19.040657216Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" returns successfully" Jan 29 11:01:19.041036 containerd[1433]: time="2025-01-29T11:01:19.041000958Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\"" Jan 29 11:01:19.041122 containerd[1433]: time="2025-01-29T11:01:19.041072435Z" level=info msg="TearDown network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully" Jan 29 11:01:19.041122 containerd[1433]: time="2025-01-29T11:01:19.041119952Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns successfully" Jan 29 11:01:19.041528 containerd[1433]: time="2025-01-29T11:01:19.041500892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:01:19.123458 containerd[1433]: time="2025-01-29T11:01:19.123412400Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:19.123583 containerd[1433]: time="2025-01-29T11:01:19.123523074Z" level=info msg="TearDown network for sandbox 
\"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 29 11:01:19.123583 containerd[1433]: time="2025-01-29T11:01:19.123534674Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:19.123837 kubelet[2525]: E0129 11:01:19.123810 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:19.125556 containerd[1433]: time="2025-01-29T11:01:19.125519011Z" level=info msg="Ensure that sandbox 053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825 in task-service has been cleanup successfully" Jan 29 11:01:19.125762 containerd[1433]: time="2025-01-29T11:01:19.125720600Z" level=info msg="TearDown network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" successfully" Jan 29 11:01:19.125762 containerd[1433]: time="2025-01-29T11:01:19.125736600Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" returns successfully" Jan 29 11:01:19.125986 containerd[1433]: time="2025-01-29T11:01:19.125955828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:2,}" Jan 29 11:01:19.126490 containerd[1433]: time="2025-01-29T11:01:19.126307130Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\"" Jan 29 11:01:19.126490 containerd[1433]: time="2025-01-29T11:01:19.126384126Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully" Jan 29 11:01:19.126490 containerd[1433]: time="2025-01-29T11:01:19.126394045Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully" 
Jan 29 11:01:19.127099 containerd[1433]: time="2025-01-29T11:01:19.126843902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:01:19.244711 containerd[1433]: time="2025-01-29T11:01:19.244620508Z" level=error msg="Failed to destroy network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.245130 containerd[1433]: time="2025-01-29T11:01:19.245026727Z" level=error msg="encountered an error cleaning up failed sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.245189 containerd[1433]: time="2025-01-29T11:01:19.245156280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.247125 kubelet[2525]: E0129 11:01:19.245417 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.247125 kubelet[2525]: E0129 11:01:19.245473 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:19.247125 kubelet[2525]: E0129 11:01:19.245498 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:19.247285 kubelet[2525]: E0129 11:01:19.245537 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" podUID="e2a43c5c-ca07-4add-b171-f2255f364fd9" Jan 29 11:01:19.256998 
containerd[1433]: time="2025-01-29T11:01:19.256855113Z" level=error msg="Failed to destroy network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.258142 containerd[1433]: time="2025-01-29T11:01:19.258101888Z" level=error msg="encountered an error cleaning up failed sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.258234 containerd[1433]: time="2025-01-29T11:01:19.258189684Z" level=error msg="Failed to destroy network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.258289 containerd[1433]: time="2025-01-29T11:01:19.258265920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.258704 kubelet[2525]: E0129 11:01:19.258657 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.258763 kubelet[2525]: E0129 11:01:19.258728 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:19.258763 kubelet[2525]: E0129 11:01:19.258746 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:19.258826 kubelet[2525]: E0129 11:01:19.258781 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jfpvg" 
podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:19.259743 containerd[1433]: time="2025-01-29T11:01:19.259286147Z" level=error msg="encountered an error cleaning up failed sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.259743 containerd[1433]: time="2025-01-29T11:01:19.259342704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.259869 kubelet[2525]: E0129 11:01:19.259505 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.259869 kubelet[2525]: E0129 11:01:19.259542 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:19.259869 
kubelet[2525]: E0129 11:01:19.259565 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:19.259988 kubelet[2525]: E0129 11:01:19.259603 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rzzz" podUID="f3a930f1-28d8-4b84-b302-2ee738d83501" Jan 29 11:01:19.264356 containerd[1433]: time="2025-01-29T11:01:19.264204012Z" level=error msg="Failed to destroy network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.266499 containerd[1433]: time="2025-01-29T11:01:19.265814248Z" level=error msg="encountered an error cleaning up failed sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.266965 containerd[1433]: time="2025-01-29T11:01:19.266877713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.267172 kubelet[2525]: E0129 11:01:19.267144 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.267414 kubelet[2525]: E0129 11:01:19.267302 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:19.267414 kubelet[2525]: E0129 11:01:19.267328 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:19.267414 kubelet[2525]: E0129 11:01:19.267373 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t46bf" podUID="fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b" Jan 29 11:01:19.269659 containerd[1433]: time="2025-01-29T11:01:19.269180073Z" level=error msg="Failed to destroy network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.269659 containerd[1433]: time="2025-01-29T11:01:19.269532215Z" level=error msg="encountered an error cleaning up failed sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.269659 containerd[1433]: time="2025-01-29T11:01:19.269587532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network 
for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.270185 kubelet[2525]: E0129 11:01:19.270145 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.270267 kubelet[2525]: E0129 11:01:19.270200 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:19.270267 kubelet[2525]: E0129 11:01:19.270218 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:19.270323 containerd[1433]: time="2025-01-29T11:01:19.270190781Z" level=error msg="Failed to destroy network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.270351 kubelet[2525]: E0129 11:01:19.270256 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" podUID="bc8d959a-7f2e-4e77-bbe7-a5e311c45518" Jan 29 11:01:19.270519 containerd[1433]: time="2025-01-29T11:01:19.270446767Z" level=error msg="encountered an error cleaning up failed sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.271734 containerd[1433]: time="2025-01-29T11:01:19.270933262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.272474 kubelet[2525]: E0129 
11:01:19.271962 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:19.273171 kubelet[2525]: E0129 11:01:19.272027 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:19.275058 kubelet[2525]: E0129 11:01:19.272642 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:19.275058 kubelet[2525]: E0129 11:01:19.274856 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" podUID="5810df49-717b-4ac3-90ec-8888521cc6d3" Jan 29 11:01:19.548430 systemd[1]: run-netns-cni\x2de5f392ab\x2d91a0\x2d25cf\x2d9fc6\x2db8eb1b372eb0.mount: Deactivated successfully. Jan 29 11:01:19.548545 systemd[1]: run-netns-cni\x2db852a3f2\x2d97cd\x2d4681\x2d1f12\x2d2e0afe1e1299.mount: Deactivated successfully. Jan 29 11:01:20.042965 kubelet[2525]: I0129 11:01:20.042933 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0" Jan 29 11:01:20.044158 containerd[1433]: time="2025-01-29T11:01:20.043601530Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\"" Jan 29 11:01:20.044158 containerd[1433]: time="2025-01-29T11:01:20.044001390Z" level=info msg="Ensure that sandbox 18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0 in task-service has been cleanup successfully" Jan 29 11:01:20.044494 containerd[1433]: time="2025-01-29T11:01:20.044365213Z" level=info msg="TearDown network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" successfully" Jan 29 11:01:20.044494 containerd[1433]: time="2025-01-29T11:01:20.044382812Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" returns successfully" Jan 29 11:01:20.046406 containerd[1433]: time="2025-01-29T11:01:20.045411922Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\"" Jan 29 11:01:20.046406 containerd[1433]: time="2025-01-29T11:01:20.045868299Z" level=info msg="TearDown network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" successfully" Jan 29 11:01:20.046406 containerd[1433]: 
time="2025-01-29T11:01:20.045970814Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" returns successfully" Jan 29 11:01:20.046061 systemd[1]: run-netns-cni\x2d32a6bd87\x2d2c79\x2d11f5\x2dc630\x2d7a1bb421ad06.mount: Deactivated successfully. Jan 29 11:01:20.047656 containerd[1433]: time="2025-01-29T11:01:20.047538778Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\"" Jan 29 11:01:20.047656 containerd[1433]: time="2025-01-29T11:01:20.047647933Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully" Jan 29 11:01:20.047757 containerd[1433]: time="2025-01-29T11:01:20.047659212Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully" Jan 29 11:01:20.048227 kubelet[2525]: I0129 11:01:20.048124 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9" Jan 29 11:01:20.061096 containerd[1433]: time="2025-01-29T11:01:20.060648620Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" Jan 29 11:01:20.061096 containerd[1433]: time="2025-01-29T11:01:20.060858850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:3,}" Jan 29 11:01:20.061978 containerd[1433]: time="2025-01-29T11:01:20.060862890Z" level=info msg="Ensure that sandbox c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9 in task-service has been cleanup successfully" Jan 29 11:01:20.062368 containerd[1433]: time="2025-01-29T11:01:20.062336698Z" level=info msg="TearDown network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" successfully" Jan 29 
11:01:20.062451 containerd[1433]: time="2025-01-29T11:01:20.062436693Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" returns successfully" Jan 29 11:01:20.063125 containerd[1433]: time="2025-01-29T11:01:20.063099861Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:20.063324 containerd[1433]: time="2025-01-29T11:01:20.063306571Z" level=info msg="TearDown network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" successfully" Jan 29 11:01:20.063394 containerd[1433]: time="2025-01-29T11:01:20.063374047Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" returns successfully" Jan 29 11:01:20.064056 systemd[1]: run-netns-cni\x2d6d35a3da\x2de2dc\x2dc9b1\x2d61ef\x2d0845d0f2a56c.mount: Deactivated successfully. Jan 29 11:01:20.065319 containerd[1433]: time="2025-01-29T11:01:20.065280275Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:20.065401 containerd[1433]: time="2025-01-29T11:01:20.065387190Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:20.065428 containerd[1433]: time="2025-01-29T11:01:20.065398869Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:20.065868 containerd[1433]: time="2025-01-29T11:01:20.065835728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:3,}" Jan 29 11:01:20.066346 kubelet[2525]: I0129 11:01:20.066182 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740" Jan 29 11:01:20.067135 
containerd[1433]: time="2025-01-29T11:01:20.067109306Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\"" Jan 29 11:01:20.067405 containerd[1433]: time="2025-01-29T11:01:20.067380453Z" level=info msg="Ensure that sandbox 7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740 in task-service has been cleanup successfully" Jan 29 11:01:20.068419 containerd[1433]: time="2025-01-29T11:01:20.068319727Z" level=info msg="TearDown network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" successfully" Jan 29 11:01:20.068419 containerd[1433]: time="2025-01-29T11:01:20.068340686Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" returns successfully" Jan 29 11:01:20.069614 systemd[1]: run-netns-cni\x2daf4bad8b\x2dd02d\x2d34fa\x2d20f5\x2da2a8cad4f236.mount: Deactivated successfully. Jan 29 11:01:20.071789 containerd[1433]: time="2025-01-29T11:01:20.070726410Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\"" Jan 29 11:01:20.071789 containerd[1433]: time="2025-01-29T11:01:20.071447575Z" level=info msg="TearDown network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" successfully" Jan 29 11:01:20.071789 containerd[1433]: time="2025-01-29T11:01:20.071463814Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" returns successfully" Jan 29 11:01:20.072020 kubelet[2525]: I0129 11:01:20.071468 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f" Jan 29 11:01:20.073337 containerd[1433]: time="2025-01-29T11:01:20.072212737Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\"" Jan 29 11:01:20.073337 containerd[1433]: 
time="2025-01-29T11:01:20.072681795Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully" Jan 29 11:01:20.073337 containerd[1433]: time="2025-01-29T11:01:20.072693674Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully" Jan 29 11:01:20.073337 containerd[1433]: time="2025-01-29T11:01:20.072981300Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\"" Jan 29 11:01:20.073337 containerd[1433]: time="2025-01-29T11:01:20.073151772Z" level=info msg="Ensure that sandbox c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f in task-service has been cleanup successfully" Jan 29 11:01:20.073337 containerd[1433]: time="2025-01-29T11:01:20.073324763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:01:20.075112 systemd[1]: run-netns-cni\x2d24cce5a0\x2dd939\x2d05ac\x2df5c5\x2dbf46a786c05e.mount: Deactivated successfully. 
Jan 29 11:01:20.075288 containerd[1433]: time="2025-01-29T11:01:20.075108676Z" level=info msg="TearDown network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" successfully" Jan 29 11:01:20.075288 containerd[1433]: time="2025-01-29T11:01:20.075260149Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" returns successfully" Jan 29 11:01:20.075669 containerd[1433]: time="2025-01-29T11:01:20.075577174Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\"" Jan 29 11:01:20.075669 containerd[1433]: time="2025-01-29T11:01:20.075668209Z" level=info msg="TearDown network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" successfully" Jan 29 11:01:20.075806 containerd[1433]: time="2025-01-29T11:01:20.075679529Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" returns successfully" Jan 29 11:01:20.076828 containerd[1433]: time="2025-01-29T11:01:20.076652321Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\"" Jan 29 11:01:20.076828 containerd[1433]: time="2025-01-29T11:01:20.076725078Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully" Jan 29 11:01:20.076828 containerd[1433]: time="2025-01-29T11:01:20.076735557Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully" Jan 29 11:01:20.076955 kubelet[2525]: I0129 11:01:20.076845 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58" Jan 29 11:01:20.077215 kubelet[2525]: E0129 11:01:20.076999 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:20.077274 containerd[1433]: time="2025-01-29T11:01:20.077210894Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" Jan 29 11:01:20.077525 containerd[1433]: time="2025-01-29T11:01:20.077351927Z" level=info msg="Ensure that sandbox 99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58 in task-service has been cleanup successfully" Jan 29 11:01:20.077995 containerd[1433]: time="2025-01-29T11:01:20.077787546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:3,}" Jan 29 11:01:20.078187 containerd[1433]: time="2025-01-29T11:01:20.078147768Z" level=info msg="TearDown network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" successfully" Jan 29 11:01:20.078187 containerd[1433]: time="2025-01-29T11:01:20.078174727Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" returns successfully" Jan 29 11:01:20.078962 containerd[1433]: time="2025-01-29T11:01:20.078861654Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:20.079203 containerd[1433]: time="2025-01-29T11:01:20.079112722Z" level=info msg="TearDown network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" successfully" Jan 29 11:01:20.079203 containerd[1433]: time="2025-01-29T11:01:20.079135080Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" returns successfully" Jan 29 11:01:20.080171 containerd[1433]: time="2025-01-29T11:01:20.080084394Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:20.080171 containerd[1433]: time="2025-01-29T11:01:20.080171830Z" level=info 
msg="TearDown network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 29 11:01:20.080282 containerd[1433]: time="2025-01-29T11:01:20.080182309Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:20.080834 kubelet[2525]: E0129 11:01:20.080346 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:20.080834 kubelet[2525]: I0129 11:01:20.080486 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828" Jan 29 11:01:20.080927 containerd[1433]: time="2025-01-29T11:01:20.080590890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:3,}" Jan 29 11:01:20.080958 containerd[1433]: time="2025-01-29T11:01:20.080931353Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\"" Jan 29 11:01:20.081544 containerd[1433]: time="2025-01-29T11:01:20.081470367Z" level=info msg="Ensure that sandbox 6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828 in task-service has been cleanup successfully" Jan 29 11:01:20.081674 containerd[1433]: time="2025-01-29T11:01:20.081652998Z" level=info msg="TearDown network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" successfully" Jan 29 11:01:20.081674 containerd[1433]: time="2025-01-29T11:01:20.081671757Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" returns successfully" Jan 29 11:01:20.082022 containerd[1433]: time="2025-01-29T11:01:20.082001541Z" level=info msg="StopPodSandbox for 
\"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\"" Jan 29 11:01:20.082167 containerd[1433]: time="2025-01-29T11:01:20.082094416Z" level=info msg="TearDown network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" successfully" Jan 29 11:01:20.082167 containerd[1433]: time="2025-01-29T11:01:20.082143494Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" returns successfully" Jan 29 11:01:20.082523 containerd[1433]: time="2025-01-29T11:01:20.082488757Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\"" Jan 29 11:01:20.082937 containerd[1433]: time="2025-01-29T11:01:20.082581913Z" level=info msg="TearDown network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully" Jan 29 11:01:20.082937 containerd[1433]: time="2025-01-29T11:01:20.082596472Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns successfully" Jan 29 11:01:20.083102 containerd[1433]: time="2025-01-29T11:01:20.083065369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:01:20.353995 containerd[1433]: time="2025-01-29T11:01:20.353950386Z" level=error msg="Failed to destroy network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.355378 containerd[1433]: time="2025-01-29T11:01:20.355342878Z" level=error msg="encountered an error cleaning up failed sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.356229 containerd[1433]: time="2025-01-29T11:01:20.356189797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.356913 kubelet[2525]: E0129 11:01:20.356808 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.357026 kubelet[2525]: E0129 11:01:20.356933 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:20.357026 kubelet[2525]: E0129 11:01:20.356960 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:20.357026 kubelet[2525]: E0129 11:01:20.357005 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" podUID="bc8d959a-7f2e-4e77-bbe7-a5e311c45518" Jan 29 11:01:20.375465 containerd[1433]: time="2025-01-29T11:01:20.375330506Z" level=error msg="Failed to destroy network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.376360 containerd[1433]: time="2025-01-29T11:01:20.376310498Z" level=error msg="encountered an error cleaning up failed sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.376600 containerd[1433]: time="2025-01-29T11:01:20.376483209Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.376802 kubelet[2525]: E0129 11:01:20.376767 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.377206 kubelet[2525]: E0129 11:01:20.376895 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:20.377206 kubelet[2525]: E0129 11:01:20.376918 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:20.377206 kubelet[2525]: E0129 11:01:20.376967 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t46bf" podUID="fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b" Jan 29 11:01:20.392323 containerd[1433]: time="2025-01-29T11:01:20.392197765Z" level=error msg="Failed to destroy network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.392978 containerd[1433]: time="2025-01-29T11:01:20.392498670Z" level=error msg="encountered an error cleaning up failed sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.393063 containerd[1433]: time="2025-01-29T11:01:20.393026884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.393297 kubelet[2525]: E0129 11:01:20.393259 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.393354 kubelet[2525]: E0129 11:01:20.393315 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:20.393354 kubelet[2525]: E0129 11:01:20.393334 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:20.393416 kubelet[2525]: E0129 11:01:20.393377 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" podUID="5810df49-717b-4ac3-90ec-8888521cc6d3" Jan 29 11:01:20.405993 containerd[1433]: time="2025-01-29T11:01:20.405944736Z" level=error msg="Failed to destroy network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.406303 containerd[1433]: time="2025-01-29T11:01:20.406267760Z" level=error msg="encountered an error cleaning up failed sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.406357 containerd[1433]: time="2025-01-29T11:01:20.406334837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.406598 kubelet[2525]: E0129 11:01:20.406546 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.406665 kubelet[2525]: E0129 11:01:20.406615 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:20.406665 kubelet[2525]: E0129 11:01:20.406636 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:20.406723 kubelet[2525]: E0129 11:01:20.406677 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" podUID="e2a43c5c-ca07-4add-b171-f2255f364fd9" Jan 29 11:01:20.409227 containerd[1433]: time="2025-01-29T11:01:20.409176738Z" level=error msg="Failed to destroy network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.409631 containerd[1433]: time="2025-01-29T11:01:20.409457125Z" level=error msg="encountered an error cleaning up failed sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.409631 containerd[1433]: time="2025-01-29T11:01:20.409525481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.409861 kubelet[2525]: E0129 11:01:20.409724 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.409861 kubelet[2525]: E0129 
11:01:20.409767 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:20.409861 kubelet[2525]: E0129 11:01:20.409799 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:20.409998 kubelet[2525]: E0129 11:01:20.409838 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rzzz" podUID="f3a930f1-28d8-4b84-b302-2ee738d83501" Jan 29 11:01:20.416110 containerd[1433]: time="2025-01-29T11:01:20.415908851Z" level=error msg="Failed to destroy network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.416390 containerd[1433]: time="2025-01-29T11:01:20.416361109Z" level=error msg="encountered an error cleaning up failed sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.417217 containerd[1433]: time="2025-01-29T11:01:20.417131751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.417369 kubelet[2525]: E0129 11:01:20.417336 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:20.417541 kubelet[2525]: E0129 11:01:20.417383 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:20.417541 kubelet[2525]: E0129 11:01:20.417404 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:20.417541 kubelet[2525]: E0129 11:01:20.417441 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:20.548595 systemd[1]: run-netns-cni\x2d26d80c74\x2d2d88\x2da353\x2deede\x2d1e6413e237e9.mount: Deactivated successfully. Jan 29 11:01:20.548685 systemd[1]: run-netns-cni\x2d6476d25e\x2d7d5f\x2de661\x2d3f30\x2d6d3dfe5dbb6b.mount: Deactivated successfully. 
Jan 29 11:01:20.965549 kubelet[2525]: I0129 11:01:20.965511 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:01:20.966008 kubelet[2525]: E0129 11:01:20.965822 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:21.085783 kubelet[2525]: I0129 11:01:21.085295 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055" Jan 29 11:01:21.086185 containerd[1433]: time="2025-01-29T11:01:21.086028336Z" level=info msg="StopPodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\"" Jan 29 11:01:21.086383 containerd[1433]: time="2025-01-29T11:01:21.086206968Z" level=info msg="Ensure that sandbox 5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055 in task-service has been cleanup successfully" Jan 29 11:01:21.086383 containerd[1433]: time="2025-01-29T11:01:21.086371881Z" level=info msg="TearDown network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" successfully" Jan 29 11:01:21.086438 containerd[1433]: time="2025-01-29T11:01:21.086386680Z" level=info msg="StopPodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" returns successfully" Jan 29 11:01:21.086819 containerd[1433]: time="2025-01-29T11:01:21.086797541Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" Jan 29 11:01:21.086925 containerd[1433]: time="2025-01-29T11:01:21.086906696Z" level=info msg="TearDown network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" successfully" Jan 29 11:01:21.086958 containerd[1433]: time="2025-01-29T11:01:21.086927495Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" returns 
successfully" Jan 29 11:01:21.088118 systemd[1]: run-netns-cni\x2dceb9ea3d\x2d0842\x2d571d\x2da124\x2d09a290dc84a9.mount: Deactivated successfully. Jan 29 11:01:21.089236 containerd[1433]: time="2025-01-29T11:01:21.089212951Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:21.089460 containerd[1433]: time="2025-01-29T11:01:21.089290708Z" level=info msg="TearDown network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" successfully" Jan 29 11:01:21.089460 containerd[1433]: time="2025-01-29T11:01:21.089301147Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" returns successfully" Jan 29 11:01:21.090246 containerd[1433]: time="2025-01-29T11:01:21.090226025Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:21.090358 containerd[1433]: time="2025-01-29T11:01:21.090339500Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:21.090462 containerd[1433]: time="2025-01-29T11:01:21.090357499Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:21.091029 containerd[1433]: time="2025-01-29T11:01:21.090996910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:4,}" Jan 29 11:01:21.091723 kubelet[2525]: I0129 11:01:21.091688 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1" Jan 29 11:01:21.092544 containerd[1433]: time="2025-01-29T11:01:21.092516680Z" level=info msg="StopPodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\"" Jan 29 11:01:21.092687 
containerd[1433]: time="2025-01-29T11:01:21.092663634Z" level=info msg="Ensure that sandbox 7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1 in task-service has been cleanup successfully" Jan 29 11:01:21.093284 containerd[1433]: time="2025-01-29T11:01:21.093256207Z" level=info msg="TearDown network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" successfully" Jan 29 11:01:21.093324 containerd[1433]: time="2025-01-29T11:01:21.093283245Z" level=info msg="StopPodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" returns successfully" Jan 29 11:01:21.095132 systemd[1]: run-netns-cni\x2da7542e48\x2d566f\x2d3fcf\x2d6166\x2de96f24d45bb1.mount: Deactivated successfully. Jan 29 11:01:21.095506 containerd[1433]: time="2025-01-29T11:01:21.095254555Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\"" Jan 29 11:01:21.095566 containerd[1433]: time="2025-01-29T11:01:21.095504544Z" level=info msg="TearDown network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" successfully" Jan 29 11:01:21.095566 containerd[1433]: time="2025-01-29T11:01:21.095518743Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" returns successfully" Jan 29 11:01:21.096225 containerd[1433]: time="2025-01-29T11:01:21.096196713Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\"" Jan 29 11:01:21.096607 containerd[1433]: time="2025-01-29T11:01:21.096412463Z" level=info msg="TearDown network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" successfully" Jan 29 11:01:21.096607 containerd[1433]: time="2025-01-29T11:01:21.096476260Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" returns successfully" Jan 29 11:01:21.097732 kubelet[2525]: I0129 
11:01:21.097705 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac" Jan 29 11:01:21.098916 containerd[1433]: time="2025-01-29T11:01:21.098548085Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\"" Jan 29 11:01:21.098916 containerd[1433]: time="2025-01-29T11:01:21.098547485Z" level=info msg="StopPodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\"" Jan 29 11:01:21.098916 containerd[1433]: time="2025-01-29T11:01:21.098657880Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully" Jan 29 11:01:21.098916 containerd[1433]: time="2025-01-29T11:01:21.098669320Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully" Jan 29 11:01:21.098916 containerd[1433]: time="2025-01-29T11:01:21.098766075Z" level=info msg="Ensure that sandbox f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac in task-service has been cleanup successfully" Jan 29 11:01:21.099196 containerd[1433]: time="2025-01-29T11:01:21.099176857Z" level=info msg="TearDown network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" successfully" Jan 29 11:01:21.099299 containerd[1433]: time="2025-01-29T11:01:21.099284532Z" level=info msg="StopPodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" returns successfully" Jan 29 11:01:21.099447 containerd[1433]: time="2025-01-29T11:01:21.099392247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:01:21.101003 systemd[1]: run-netns-cni\x2d02a70365\x2d309d\x2dfe54\x2dcb17\x2d49465caeec72.mount: Deactivated successfully. 
Jan 29 11:01:21.103037 containerd[1433]: time="2025-01-29T11:01:21.103002082Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\"" Jan 29 11:01:21.103142 containerd[1433]: time="2025-01-29T11:01:21.103102157Z" level=info msg="TearDown network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" successfully" Jan 29 11:01:21.103142 containerd[1433]: time="2025-01-29T11:01:21.103113237Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" returns successfully" Jan 29 11:01:21.103766 containerd[1433]: time="2025-01-29T11:01:21.103743088Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\"" Jan 29 11:01:21.103977 containerd[1433]: time="2025-01-29T11:01:21.103897721Z" level=info msg="TearDown network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" successfully" Jan 29 11:01:21.103977 containerd[1433]: time="2025-01-29T11:01:21.103914480Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" returns successfully" Jan 29 11:01:21.105675 containerd[1433]: time="2025-01-29T11:01:21.104746082Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\"" Jan 29 11:01:21.105675 containerd[1433]: time="2025-01-29T11:01:21.104835358Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully" Jan 29 11:01:21.105675 containerd[1433]: time="2025-01-29T11:01:21.104847158Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully" Jan 29 11:01:21.106015 kubelet[2525]: I0129 11:01:21.105983 2525 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627" Jan 29 11:01:21.106499 containerd[1433]: time="2025-01-29T11:01:21.106331450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:4,}" Jan 29 11:01:21.106869 containerd[1433]: time="2025-01-29T11:01:21.106847547Z" level=info msg="StopPodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\"" Jan 29 11:01:21.107152 containerd[1433]: time="2025-01-29T11:01:21.107129574Z" level=info msg="Ensure that sandbox d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627 in task-service has been cleanup successfully" Jan 29 11:01:21.107617 containerd[1433]: time="2025-01-29T11:01:21.107589913Z" level=info msg="TearDown network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" successfully" Jan 29 11:01:21.107687 containerd[1433]: time="2025-01-29T11:01:21.107672949Z" level=info msg="StopPodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" returns successfully" Jan 29 11:01:21.109069 systemd[1]: run-netns-cni\x2d8f26094a\x2d5005\x2dedca\x2d146a\x2d42ae522ff5d1.mount: Deactivated successfully. 
Jan 29 11:01:21.110338 containerd[1433]: time="2025-01-29T11:01:21.110168755Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\"" Jan 29 11:01:21.110338 containerd[1433]: time="2025-01-29T11:01:21.110255471Z" level=info msg="TearDown network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" successfully" Jan 29 11:01:21.110338 containerd[1433]: time="2025-01-29T11:01:21.110265031Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" returns successfully" Jan 29 11:01:21.110770 containerd[1433]: time="2025-01-29T11:01:21.110749769Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\"" Jan 29 11:01:21.110974 kubelet[2525]: I0129 11:01:21.110943 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454" Jan 29 11:01:21.111168 containerd[1433]: time="2025-01-29T11:01:21.111095393Z" level=info msg="TearDown network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" successfully" Jan 29 11:01:21.111168 containerd[1433]: time="2025-01-29T11:01:21.111112992Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" returns successfully" Jan 29 11:01:21.111507 containerd[1433]: time="2025-01-29T11:01:21.111472416Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\"" Jan 29 11:01:21.111600 containerd[1433]: time="2025-01-29T11:01:21.111559572Z" level=info msg="TearDown network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully" Jan 29 11:01:21.111600 containerd[1433]: time="2025-01-29T11:01:21.111574411Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns 
successfully" Jan 29 11:01:21.111781 containerd[1433]: time="2025-01-29T11:01:21.111716564Z" level=info msg="StopPodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\"" Jan 29 11:01:21.111897 containerd[1433]: time="2025-01-29T11:01:21.111849238Z" level=info msg="Ensure that sandbox 7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454 in task-service has been cleanup successfully" Jan 29 11:01:21.112101 containerd[1433]: time="2025-01-29T11:01:21.112047549Z" level=info msg="TearDown network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" successfully" Jan 29 11:01:21.112101 containerd[1433]: time="2025-01-29T11:01:21.112067348Z" level=info msg="StopPodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" returns successfully" Jan 29 11:01:21.112409 containerd[1433]: time="2025-01-29T11:01:21.112313217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:01:21.112878 containerd[1433]: time="2025-01-29T11:01:21.112801555Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\"" Jan 29 11:01:21.112878 containerd[1433]: time="2025-01-29T11:01:21.112874912Z" level=info msg="TearDown network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" successfully" Jan 29 11:01:21.112954 containerd[1433]: time="2025-01-29T11:01:21.112885311Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" returns successfully" Jan 29 11:01:21.113496 containerd[1433]: time="2025-01-29T11:01:21.113406167Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\"" Jan 29 11:01:21.113496 containerd[1433]: time="2025-01-29T11:01:21.113474644Z" level=info msg="TearDown network for sandbox 
\"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" successfully" Jan 29 11:01:21.113496 containerd[1433]: time="2025-01-29T11:01:21.113490763Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" returns successfully" Jan 29 11:01:21.114171 containerd[1433]: time="2025-01-29T11:01:21.114014180Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\"" Jan 29 11:01:21.114171 containerd[1433]: time="2025-01-29T11:01:21.114109575Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully" Jan 29 11:01:21.114171 containerd[1433]: time="2025-01-29T11:01:21.114121855Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully" Jan 29 11:01:21.114289 kubelet[2525]: I0129 11:01:21.114103 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f" Jan 29 11:01:21.114289 kubelet[2525]: E0129 11:01:21.114272 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:21.114497 kubelet[2525]: E0129 11:01:21.114402 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:21.114831 containerd[1433]: time="2025-01-29T11:01:21.114802224Z" level=info msg="StopPodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\"" Jan 29 11:01:21.115119 containerd[1433]: time="2025-01-29T11:01:21.114946377Z" level=info msg="Ensure that sandbox 0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f in task-service has been cleanup successfully" Jan 29 
11:01:21.115285 containerd[1433]: time="2025-01-29T11:01:21.115263003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:4,}" Jan 29 11:01:21.115451 containerd[1433]: time="2025-01-29T11:01:21.115297641Z" level=info msg="TearDown network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" successfully" Jan 29 11:01:21.115451 containerd[1433]: time="2025-01-29T11:01:21.115447314Z" level=info msg="StopPodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" returns successfully" Jan 29 11:01:21.115917 containerd[1433]: time="2025-01-29T11:01:21.115854696Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" Jan 29 11:01:21.115963 containerd[1433]: time="2025-01-29T11:01:21.115935692Z" level=info msg="TearDown network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" successfully" Jan 29 11:01:21.115963 containerd[1433]: time="2025-01-29T11:01:21.115945451Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" returns successfully" Jan 29 11:01:21.116397 containerd[1433]: time="2025-01-29T11:01:21.116225599Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:21.116609 containerd[1433]: time="2025-01-29T11:01:21.116549264Z" level=info msg="TearDown network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" successfully" Jan 29 11:01:21.116609 containerd[1433]: time="2025-01-29T11:01:21.116568783Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" returns successfully" Jan 29 11:01:21.117463 containerd[1433]: time="2025-01-29T11:01:21.117405265Z" level=info msg="StopPodSandbox for 
\"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:21.117675 containerd[1433]: time="2025-01-29T11:01:21.117570257Z" level=info msg="TearDown network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 29 11:01:21.117675 containerd[1433]: time="2025-01-29T11:01:21.117619575Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:21.117941 kubelet[2525]: E0129 11:01:21.117921 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:21.118449 containerd[1433]: time="2025-01-29T11:01:21.118422938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:4,}" Jan 29 11:01:21.272109 containerd[1433]: time="2025-01-29T11:01:21.271802460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:21.283168 containerd[1433]: time="2025-01-29T11:01:21.283099425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 11:01:21.289267 containerd[1433]: time="2025-01-29T11:01:21.289223586Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:21.301190 containerd[1433]: time="2025-01-29T11:01:21.300623146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:21.315638 containerd[1433]: time="2025-01-29T11:01:21.315587103Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.314816336s" Jan 29 11:01:21.315754 containerd[1433]: time="2025-01-29T11:01:21.315682938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 11:01:21.328550 containerd[1433]: time="2025-01-29T11:01:21.328511833Z" level=info msg="CreateContainer within sandbox \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:01:21.351578 containerd[1433]: time="2025-01-29T11:01:21.351527583Z" level=info msg="CreateContainer within sandbox \"6c0b5fd74dbc93f9cbfbf95fb613e94134306866376b8212fa8ba3dd30cb0ce3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ee110280b36870cec9e124fca6d21c320db129eee40318060d78a632669722c6\"" Jan 29 11:01:21.356615 containerd[1433]: time="2025-01-29T11:01:21.356561473Z" level=info msg="StartContainer for \"ee110280b36870cec9e124fca6d21c320db129eee40318060d78a632669722c6\"" Jan 29 11:01:21.381577 containerd[1433]: time="2025-01-29T11:01:21.381529774Z" level=error msg="Failed to destroy network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.383304 containerd[1433]: time="2025-01-29T11:01:21.383272695Z" level=error msg="encountered an error cleaning up failed sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.383451 containerd[1433]: time="2025-01-29T11:01:21.383431207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.383935 kubelet[2525]: E0129 11:01:21.383892 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.384029 kubelet[2525]: E0129 11:01:21.383958 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:21.384029 kubelet[2525]: E0129 11:01:21.383981 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" Jan 29 11:01:21.384151 kubelet[2525]: E0129 11:01:21.384031 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-ddf92_calico-apiserver(bc8d959a-7f2e-4e77-bbe7-a5e311c45518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" podUID="bc8d959a-7f2e-4e77-bbe7-a5e311c45518" Jan 29 11:01:21.394870 containerd[1433]: time="2025-01-29T11:01:21.394819288Z" level=error msg="Failed to destroy network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.396739 containerd[1433]: time="2025-01-29T11:01:21.396694362Z" level=error msg="encountered an error cleaning up failed sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.396822 containerd[1433]: time="2025-01-29T11:01:21.396765919Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.397087 kubelet[2525]: E0129 11:01:21.397039 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.397141 kubelet[2525]: E0129 11:01:21.397105 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:21.397141 kubelet[2525]: E0129 11:01:21.397127 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jfpvg" Jan 29 11:01:21.397254 kubelet[2525]: E0129 11:01:21.397167 2525 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jfpvg_calico-system(7fb477ff-983f-4d5c-ba2e-5632face2710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jfpvg" podUID="7fb477ff-983f-4d5c-ba2e-5632face2710" Jan 29 11:01:21.398964 containerd[1433]: time="2025-01-29T11:01:21.398932260Z" level=error msg="Failed to destroy network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.400321 containerd[1433]: time="2025-01-29T11:01:21.400287878Z" level=error msg="encountered an error cleaning up failed sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.400576 containerd[1433]: time="2025-01-29T11:01:21.400461870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.400796 kubelet[2525]: E0129 11:01:21.400757 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.400875 kubelet[2525]: E0129 11:01:21.400804 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:21.400875 kubelet[2525]: E0129 11:01:21.400823 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" Jan 29 11:01:21.400875 kubelet[2525]: E0129 11:01:21.400853 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d55b65567-pc9f6_calico-system(e2a43c5c-ca07-4add-b171-f2255f364fd9)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" podUID="e2a43c5c-ca07-4add-b171-f2255f364fd9" Jan 29 11:01:21.406394 containerd[1433]: time="2025-01-29T11:01:21.406358961Z" level=error msg="Failed to destroy network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.406771 containerd[1433]: time="2025-01-29T11:01:21.406744184Z" level=error msg="encountered an error cleaning up failed sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.407971 containerd[1433]: time="2025-01-29T11:01:21.407935289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.408319 kubelet[2525]: E0129 11:01:21.408280 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.408468 kubelet[2525]: E0129 11:01:21.408334 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:21.408468 kubelet[2525]: E0129 11:01:21.408351 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" Jan 29 11:01:21.408468 kubelet[2525]: E0129 11:01:21.408383 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548cc7dcc6-5f4ww_calico-apiserver(5810df49-717b-4ac3-90ec-8888521cc6d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" podUID="5810df49-717b-4ac3-90ec-8888521cc6d3" Jan 29 11:01:21.409236 containerd[1433]: time="2025-01-29T11:01:21.409200752Z" level=error msg="Failed to destroy network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.409528 containerd[1433]: time="2025-01-29T11:01:21.409482699Z" level=error msg="encountered an error cleaning up failed sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.409588 containerd[1433]: time="2025-01-29T11:01:21.409538056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.409785 kubelet[2525]: E0129 11:01:21.409688 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.409785 kubelet[2525]: E0129 11:01:21.409755 2525 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:21.409785 kubelet[2525]: E0129 11:01:21.409771 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t46bf" Jan 29 11:01:21.409904 kubelet[2525]: E0129 11:01:21.409812 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t46bf_kube-system(fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t46bf" podUID="fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b" Jan 29 11:01:21.412408 containerd[1433]: time="2025-01-29T11:01:21.412248333Z" level=error msg="Failed to destroy network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.412922 containerd[1433]: time="2025-01-29T11:01:21.412893823Z" level=error msg="encountered an error cleaning up failed sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.413568 containerd[1433]: time="2025-01-29T11:01:21.413448158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.413776 kubelet[2525]: E0129 11:01:21.413744 2525 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:01:21.413851 kubelet[2525]: E0129 11:01:21.413787 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:21.413851 kubelet[2525]: E0129 11:01:21.413806 2525 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rzzz" Jan 29 11:01:21.415786 kubelet[2525]: E0129 11:01:21.413857 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rzzz_kube-system(f3a930f1-28d8-4b84-b302-2ee738d83501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rzzz" podUID="f3a930f1-28d8-4b84-b302-2ee738d83501" Jan 29 11:01:21.435267 systemd[1]: Started cri-containerd-ee110280b36870cec9e124fca6d21c320db129eee40318060d78a632669722c6.scope - libcontainer container ee110280b36870cec9e124fca6d21c320db129eee40318060d78a632669722c6. Jan 29 11:01:21.463027 containerd[1433]: time="2025-01-29T11:01:21.462912581Z" level=info msg="StartContainer for \"ee110280b36870cec9e124fca6d21c320db129eee40318060d78a632669722c6\" returns successfully" Jan 29 11:01:21.550439 systemd[1]: run-netns-cni\x2de030d43c\x2dc470\x2d8d2f\x2d04ad\x2df2417aa1493b.mount: Deactivated successfully. 
Jan 29 11:01:21.550540 systemd[1]: run-netns-cni\x2dc0fec86b\x2db6b6\x2d8d25\x2dcef4\x2d9cde10b7c035.mount: Deactivated successfully. Jan 29 11:01:21.550588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395858734.mount: Deactivated successfully. Jan 29 11:01:21.656340 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:01:21.656441 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:01:22.117824 kubelet[2525]: I0129 11:01:22.117785 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8" Jan 29 11:01:22.118351 containerd[1433]: time="2025-01-29T11:01:22.118304610Z" level=info msg="StopPodSandbox for \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\"" Jan 29 11:01:22.119371 containerd[1433]: time="2025-01-29T11:01:22.118489242Z" level=info msg="Ensure that sandbox 0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8 in task-service has been cleanup successfully" Jan 29 11:01:22.119371 containerd[1433]: time="2025-01-29T11:01:22.118779190Z" level=info msg="TearDown network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\" successfully" Jan 29 11:01:22.119371 containerd[1433]: time="2025-01-29T11:01:22.118794869Z" level=info msg="StopPodSandbox for \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\" returns successfully" Jan 29 11:01:22.119371 containerd[1433]: time="2025-01-29T11:01:22.119227851Z" level=info msg="StopPodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\"" Jan 29 11:01:22.119371 containerd[1433]: time="2025-01-29T11:01:22.119300648Z" level=info msg="TearDown network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" successfully" Jan 29 11:01:22.119371 containerd[1433]: time="2025-01-29T11:01:22.119309767Z" level=info 
msg="StopPodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" returns successfully" Jan 29 11:01:22.119809 containerd[1433]: time="2025-01-29T11:01:22.119748428Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\"" Jan 29 11:01:22.120303 containerd[1433]: time="2025-01-29T11:01:22.120211929Z" level=info msg="TearDown network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" successfully" Jan 29 11:01:22.120303 containerd[1433]: time="2025-01-29T11:01:22.120297405Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" returns successfully" Jan 29 11:01:22.120421 systemd[1]: run-netns-cni\x2d3fc90e92\x2d4b5d\x2db5a9\x2dcbc8\x2d870a15494d93.mount: Deactivated successfully. Jan 29 11:01:22.120909 containerd[1433]: time="2025-01-29T11:01:22.120794624Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\"" Jan 29 11:01:22.121016 containerd[1433]: time="2025-01-29T11:01:22.120887180Z" level=info msg="TearDown network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" successfully" Jan 29 11:01:22.121016 containerd[1433]: time="2025-01-29T11:01:22.120973416Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" returns successfully" Jan 29 11:01:22.121273 kubelet[2525]: I0129 11:01:22.121188 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561" Jan 29 11:01:22.121644 containerd[1433]: time="2025-01-29T11:01:22.121572790Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\"" Jan 29 11:01:22.121939 containerd[1433]: time="2025-01-29T11:01:22.121648467Z" level=info msg="TearDown network for sandbox 
\"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully" Jan 29 11:01:22.121939 containerd[1433]: time="2025-01-29T11:01:22.121658947Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns successfully" Jan 29 11:01:22.121939 containerd[1433]: time="2025-01-29T11:01:22.121709504Z" level=info msg="StopPodSandbox for \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\"" Jan 29 11:01:22.121939 containerd[1433]: time="2025-01-29T11:01:22.121882857Z" level=info msg="Ensure that sandbox 8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561 in task-service has been cleanup successfully" Jan 29 11:01:22.122048 containerd[1433]: time="2025-01-29T11:01:22.122028811Z" level=info msg="TearDown network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\" successfully" Jan 29 11:01:22.122048 containerd[1433]: time="2025-01-29T11:01:22.122041090Z" level=info msg="StopPodSandbox for \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\" returns successfully" Jan 29 11:01:22.122488 containerd[1433]: time="2025-01-29T11:01:22.122252481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:01:22.122488 containerd[1433]: time="2025-01-29T11:01:22.122324438Z" level=info msg="StopPodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\"" Jan 29 11:01:22.122488 containerd[1433]: time="2025-01-29T11:01:22.122394915Z" level=info msg="TearDown network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" successfully" Jan 29 11:01:22.122488 containerd[1433]: time="2025-01-29T11:01:22.122403955Z" level=info msg="StopPodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" returns successfully" Jan 29 11:01:22.122793 
containerd[1433]: time="2025-01-29T11:01:22.122623345Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\"" Jan 29 11:01:22.122793 containerd[1433]: time="2025-01-29T11:01:22.122696942Z" level=info msg="TearDown network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" successfully" Jan 29 11:01:22.122793 containerd[1433]: time="2025-01-29T11:01:22.122725701Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" returns successfully" Jan 29 11:01:22.123500 containerd[1433]: time="2025-01-29T11:01:22.123104205Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\"" Jan 29 11:01:22.123500 containerd[1433]: time="2025-01-29T11:01:22.123190201Z" level=info msg="TearDown network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" successfully" Jan 29 11:01:22.123500 containerd[1433]: time="2025-01-29T11:01:22.123201521Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" returns successfully" Jan 29 11:01:22.123991 containerd[1433]: time="2025-01-29T11:01:22.123548266Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\"" Jan 29 11:01:22.123991 containerd[1433]: time="2025-01-29T11:01:22.123630822Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully" Jan 29 11:01:22.123991 containerd[1433]: time="2025-01-29T11:01:22.123640622Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully" Jan 29 11:01:22.125819 kubelet[2525]: E0129 11:01:22.123800 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 29 11:01:22.124493 systemd[1]: run-netns-cni\x2d44f18ad5\x2d039b\x2de5e9\x2d8478\x2d0161fc257ff9.mount: Deactivated successfully. Jan 29 11:01:22.125966 containerd[1433]: time="2025-01-29T11:01:22.124868929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:5,}" Jan 29 11:01:22.126091 kubelet[2525]: I0129 11:01:22.126063 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb" Jan 29 11:01:22.127322 containerd[1433]: time="2025-01-29T11:01:22.126437862Z" level=info msg="StopPodSandbox for \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\"" Jan 29 11:01:22.127322 containerd[1433]: time="2025-01-29T11:01:22.126582216Z" level=info msg="Ensure that sandbox 6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb in task-service has been cleanup successfully" Jan 29 11:01:22.127322 containerd[1433]: time="2025-01-29T11:01:22.126988359Z" level=info msg="TearDown network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\" successfully" Jan 29 11:01:22.127450 containerd[1433]: time="2025-01-29T11:01:22.127004798Z" level=info msg="StopPodSandbox for \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\" returns successfully" Jan 29 11:01:22.128676 containerd[1433]: time="2025-01-29T11:01:22.127947998Z" level=info msg="StopPodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\"" Jan 29 11:01:22.128689 systemd[1]: run-netns-cni\x2d319ad54c\x2dc8f8\x2ddbba\x2df470\x2d3eeb03947c59.mount: Deactivated successfully. 
Jan 29 11:01:22.130625 containerd[1433]: time="2025-01-29T11:01:22.130545167Z" level=info msg="TearDown network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" successfully" Jan 29 11:01:22.130625 containerd[1433]: time="2025-01-29T11:01:22.130570165Z" level=info msg="StopPodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" returns successfully" Jan 29 11:01:22.131748 containerd[1433]: time="2025-01-29T11:01:22.131718196Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" Jan 29 11:01:22.131925 containerd[1433]: time="2025-01-29T11:01:22.131897189Z" level=info msg="TearDown network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" successfully" Jan 29 11:01:22.131955 containerd[1433]: time="2025-01-29T11:01:22.131917868Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" returns successfully" Jan 29 11:01:22.132227 containerd[1433]: time="2025-01-29T11:01:22.132207695Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:22.132317 containerd[1433]: time="2025-01-29T11:01:22.132294692Z" level=info msg="TearDown network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" successfully" Jan 29 11:01:22.132366 containerd[1433]: time="2025-01-29T11:01:22.132314811Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" returns successfully" Jan 29 11:01:22.132628 containerd[1433]: time="2025-01-29T11:01:22.132610398Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:22.133237 containerd[1433]: time="2025-01-29T11:01:22.133158815Z" level=info msg="TearDown network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 
29 11:01:22.133237 containerd[1433]: time="2025-01-29T11:01:22.133182414Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:22.134784 kubelet[2525]: E0129 11:01:22.133457 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:22.139296 containerd[1433]: time="2025-01-29T11:01:22.139262434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:5,}" Jan 29 11:01:22.141586 kubelet[2525]: E0129 11:01:22.141553 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:22.148755 kubelet[2525]: I0129 11:01:22.148724 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9" Jan 29 11:01:22.150752 containerd[1433]: time="2025-01-29T11:01:22.149911738Z" level=info msg="StopPodSandbox for \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\"" Jan 29 11:01:22.150752 containerd[1433]: time="2025-01-29T11:01:22.150063252Z" level=info msg="Ensure that sandbox b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9 in task-service has been cleanup successfully" Jan 29 11:01:22.150752 containerd[1433]: time="2025-01-29T11:01:22.150256323Z" level=info msg="TearDown network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\" successfully" Jan 29 11:01:22.150752 containerd[1433]: time="2025-01-29T11:01:22.150284722Z" level=info msg="StopPodSandbox for \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\" returns successfully" Jan 29 11:01:22.153102 containerd[1433]: 
time="2025-01-29T11:01:22.151651104Z" level=info msg="StopPodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\"" Jan 29 11:01:22.153102 containerd[1433]: time="2025-01-29T11:01:22.151926852Z" level=info msg="TearDown network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" successfully" Jan 29 11:01:22.153102 containerd[1433]: time="2025-01-29T11:01:22.152098205Z" level=info msg="StopPodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" returns successfully" Jan 29 11:01:22.153102 containerd[1433]: time="2025-01-29T11:01:22.152657461Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" Jan 29 11:01:22.153102 containerd[1433]: time="2025-01-29T11:01:22.152871212Z" level=info msg="TearDown network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" successfully" Jan 29 11:01:22.153102 containerd[1433]: time="2025-01-29T11:01:22.152883971Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" returns successfully" Jan 29 11:01:22.153066 systemd[1]: run-netns-cni\x2daeee1a87\x2dd2bd\x2d45e5\x2deddb\x2de604f10e8c10.mount: Deactivated successfully. 
Jan 29 11:01:22.153495 containerd[1433]: time="2025-01-29T11:01:22.153468586Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:22.154506 containerd[1433]: time="2025-01-29T11:01:22.154469023Z" level=info msg="TearDown network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" successfully" Jan 29 11:01:22.154506 containerd[1433]: time="2025-01-29T11:01:22.154493982Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" returns successfully" Jan 29 11:01:22.154845 kubelet[2525]: I0129 11:01:22.154810 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8" Jan 29 11:01:22.155322 containerd[1433]: time="2025-01-29T11:01:22.155289228Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:22.155388 containerd[1433]: time="2025-01-29T11:01:22.155375224Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:22.155388 containerd[1433]: time="2025-01-29T11:01:22.155385584Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:22.155879 containerd[1433]: time="2025-01-29T11:01:22.155835645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:5,}" Jan 29 11:01:22.156210 containerd[1433]: time="2025-01-29T11:01:22.156187630Z" level=info msg="StopPodSandbox for \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\"" Jan 29 11:01:22.156767 containerd[1433]: time="2025-01-29T11:01:22.156369662Z" level=info msg="Ensure that sandbox 
33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8 in task-service has been cleanup successfully" Jan 29 11:01:22.156767 containerd[1433]: time="2025-01-29T11:01:22.156631891Z" level=info msg="TearDown network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\" successfully" Jan 29 11:01:22.156767 containerd[1433]: time="2025-01-29T11:01:22.156660330Z" level=info msg="StopPodSandbox for \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\" returns successfully" Jan 29 11:01:22.157656 containerd[1433]: time="2025-01-29T11:01:22.157430337Z" level=info msg="StopPodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\"" Jan 29 11:01:22.157656 containerd[1433]: time="2025-01-29T11:01:22.157530452Z" level=info msg="TearDown network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" successfully" Jan 29 11:01:22.157656 containerd[1433]: time="2025-01-29T11:01:22.157542132Z" level=info msg="StopPodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" returns successfully" Jan 29 11:01:22.158920 containerd[1433]: time="2025-01-29T11:01:22.157772682Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\"" Jan 29 11:01:22.158920 containerd[1433]: time="2025-01-29T11:01:22.157853678Z" level=info msg="TearDown network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" successfully" Jan 29 11:01:22.158920 containerd[1433]: time="2025-01-29T11:01:22.157863638Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" returns successfully" Jan 29 11:01:22.159819 containerd[1433]: time="2025-01-29T11:01:22.159506808Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\"" Jan 29 11:01:22.159819 containerd[1433]: time="2025-01-29T11:01:22.159581845Z" level=info 
msg="TearDown network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" successfully" Jan 29 11:01:22.159819 containerd[1433]: time="2025-01-29T11:01:22.159591204Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" returns successfully" Jan 29 11:01:22.161112 containerd[1433]: time="2025-01-29T11:01:22.160473206Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\"" Jan 29 11:01:22.161112 containerd[1433]: time="2025-01-29T11:01:22.160545963Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully" Jan 29 11:01:22.161112 containerd[1433]: time="2025-01-29T11:01:22.160557083Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully" Jan 29 11:01:22.161492 containerd[1433]: time="2025-01-29T11:01:22.161461684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:01:22.162459 kubelet[2525]: I0129 11:01:22.162419 2525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a" Jan 29 11:01:22.163206 containerd[1433]: time="2025-01-29T11:01:22.163123893Z" level=info msg="StopPodSandbox for \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\"" Jan 29 11:01:22.168252 kubelet[2525]: I0129 11:01:22.168199 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7qwp5" podStartSLOduration=2.1453020560000002 podStartE2EDuration="15.168185197s" podCreationTimestamp="2025-01-29 11:01:07 +0000 UTC" firstStartedPulling="2025-01-29 11:01:08.296439071 +0000 UTC m=+12.483064736" lastFinishedPulling="2025-01-29 11:01:21.319322292 
+0000 UTC m=+25.505947877" observedRunningTime="2025-01-29 11:01:22.166762337 +0000 UTC m=+26.353387962" watchObservedRunningTime="2025-01-29 11:01:22.168185197 +0000 UTC m=+26.354810782" Jan 29 11:01:22.171860 containerd[1433]: time="2025-01-29T11:01:22.171426418Z" level=info msg="Ensure that sandbox 33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a in task-service has been cleanup successfully" Jan 29 11:01:22.171860 containerd[1433]: time="2025-01-29T11:01:22.171657368Z" level=info msg="TearDown network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\" successfully" Jan 29 11:01:22.171860 containerd[1433]: time="2025-01-29T11:01:22.171670727Z" level=info msg="StopPodSandbox for \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\" returns successfully" Jan 29 11:01:22.172626 containerd[1433]: time="2025-01-29T11:01:22.172600088Z" level=info msg="StopPodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\"" Jan 29 11:01:22.172811 containerd[1433]: time="2025-01-29T11:01:22.172793959Z" level=info msg="TearDown network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" successfully" Jan 29 11:01:22.172870 containerd[1433]: time="2025-01-29T11:01:22.172857757Z" level=info msg="StopPodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" returns successfully" Jan 29 11:01:22.173287 containerd[1433]: time="2025-01-29T11:01:22.173265859Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\"" Jan 29 11:01:22.173457 containerd[1433]: time="2025-01-29T11:01:22.173431412Z" level=info msg="TearDown network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" successfully" Jan 29 11:01:22.173535 containerd[1433]: time="2025-01-29T11:01:22.173521048Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" 
returns successfully" Jan 29 11:01:22.174026 containerd[1433]: time="2025-01-29T11:01:22.174001708Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\"" Jan 29 11:01:22.174204 containerd[1433]: time="2025-01-29T11:01:22.174186620Z" level=info msg="TearDown network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" successfully" Jan 29 11:01:22.174376 containerd[1433]: time="2025-01-29T11:01:22.174266936Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" returns successfully" Jan 29 11:01:22.175126 containerd[1433]: time="2025-01-29T11:01:22.174830392Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\"" Jan 29 11:01:22.175126 containerd[1433]: time="2025-01-29T11:01:22.174908109Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully" Jan 29 11:01:22.175126 containerd[1433]: time="2025-01-29T11:01:22.174917629Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully" Jan 29 11:01:22.175386 containerd[1433]: time="2025-01-29T11:01:22.175337691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:5,}" Jan 29 11:01:22.557023 systemd[1]: run-netns-cni\x2dd9a1d03d\x2d89c9\x2d02e8\x2d764e\x2d4e1a9376d7e2.mount: Deactivated successfully. Jan 29 11:01:22.557127 systemd[1]: run-netns-cni\x2d04b4bca0\x2d206f\x2da3e8\x2d7b38\x2db27bcf2b8e35.mount: Deactivated successfully. 
Jan 29 11:01:22.766329 systemd-networkd[1344]: cali79eec63eb6c: Link UP Jan 29 11:01:22.766602 systemd-networkd[1344]: cali79eec63eb6c: Gained carrier Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.345 [INFO][4550] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.456 [INFO][4550] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jfpvg-eth0 csi-node-driver- calico-system 7fb477ff-983f-4d5c-ba2e-5632face2710 602 0 2025-01-29 11:01:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jfpvg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali79eec63eb6c [] []}} ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.456 [INFO][4550] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.713 [INFO][4596] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" HandleID="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Workload="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.783566 containerd[1433]: 
2025-01-29 11:01:22.728 [INFO][4596] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" HandleID="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Workload="localhost-k8s-csi--node--driver--jfpvg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000184da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jfpvg", "timestamp":"2025-01-29 11:01:22.713511071 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.729 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.729 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.729 [INFO][4596] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.731 [INFO][4596] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.739 [INFO][4596] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.744 [INFO][4596] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.745 [INFO][4596] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.747 [INFO][4596] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.747 [INFO][4596] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.749 [INFO][4596] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.753 [INFO][4596] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.757 [INFO][4596] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.757 [INFO][4596] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" host="localhost" Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.757 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:01:22.783566 containerd[1433]: 2025-01-29 11:01:22.757 [INFO][4596] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" HandleID="k8s-pod-network.d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Workload="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.784257 containerd[1433]: 2025-01-29 11:01:22.759 [INFO][4550] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jfpvg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7fb477ff-983f-4d5c-ba2e-5632face2710", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jfpvg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79eec63eb6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:22.784257 containerd[1433]: 2025-01-29 11:01:22.759 [INFO][4550] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.784257 containerd[1433]: 2025-01-29 11:01:22.759 [INFO][4550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79eec63eb6c ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.784257 containerd[1433]: 2025-01-29 11:01:22.767 [INFO][4550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.784257 containerd[1433]: 2025-01-29 11:01:22.767 [INFO][4550] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" 
Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jfpvg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7fb477ff-983f-4d5c-ba2e-5632face2710", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b", Pod:"csi-node-driver-jfpvg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79eec63eb6c", MAC:"32:b9:40:c4:66:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:22.784257 containerd[1433]: 2025-01-29 11:01:22.780 [INFO][4550] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b" Namespace="calico-system" Pod="csi-node-driver-jfpvg" WorkloadEndpoint="localhost-k8s-csi--node--driver--jfpvg-eth0" Jan 29 11:01:22.803913 containerd[1433]: 
time="2025-01-29T11:01:22.803694214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:22.803913 containerd[1433]: time="2025-01-29T11:01:22.803762891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:22.803913 containerd[1433]: time="2025-01-29T11:01:22.803778450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:22.803913 containerd[1433]: time="2025-01-29T11:01:22.803859647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:22.826233 systemd[1]: Started cri-containerd-d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b.scope - libcontainer container d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b. 
Jan 29 11:01:22.837569 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:01:22.851551 containerd[1433]: time="2025-01-29T11:01:22.851404773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jfpvg,Uid:7fb477ff-983f-4d5c-ba2e-5632face2710,Namespace:calico-system,Attempt:5,} returns sandbox id \"d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b\"" Jan 29 11:01:22.854576 containerd[1433]: time="2025-01-29T11:01:22.854407245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:01:22.868434 systemd-networkd[1344]: cali4baba3c62cc: Link UP Jan 29 11:01:22.868947 systemd-networkd[1344]: cali4baba3c62cc: Gained carrier Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.200 [INFO][4481] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.443 [INFO][4481] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0 calico-apiserver-548cc7dcc6- calico-apiserver 5810df49-717b-4ac3-90ec-8888521cc6d3 688 0 2025-01-29 11:01:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548cc7dcc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-548cc7dcc6-5f4ww eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4baba3c62cc [] []}} ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.446 [INFO][4481] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.713 [INFO][4594] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" HandleID="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Workload="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.729 [INFO][4594] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" HandleID="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Workload="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006827b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-548cc7dcc6-5f4ww", "timestamp":"2025-01-29 11:01:22.713610067 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.729 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.757 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.757 [INFO][4594] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.832 [INFO][4594] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.837 [INFO][4594] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.849 [INFO][4594] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.851 [INFO][4594] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.853 [INFO][4594] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.853 [INFO][4594] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.855 [INFO][4594] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.860 [INFO][4594] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.865 [INFO][4594] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.865 [INFO][4594] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" host="localhost" Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.865 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:01:22.882985 containerd[1433]: 2025-01-29 11:01:22.865 [INFO][4594] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" HandleID="k8s-pod-network.350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Workload="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.883635 containerd[1433]: 2025-01-29 11:01:22.867 [INFO][4481] cni-plugin/k8s.go 386: Populated endpoint ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0", GenerateName:"calico-apiserver-548cc7dcc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5810df49-717b-4ac3-90ec-8888521cc6d3", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548cc7dcc6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-548cc7dcc6-5f4ww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4baba3c62cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:22.883635 containerd[1433]: 2025-01-29 11:01:22.867 [INFO][4481] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.883635 containerd[1433]: 2025-01-29 11:01:22.867 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4baba3c62cc ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.883635 containerd[1433]: 2025-01-29 11:01:22.868 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.883635 containerd[1433]: 2025-01-29 11:01:22.869 [INFO][4481] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0", GenerateName:"calico-apiserver-548cc7dcc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5810df49-717b-4ac3-90ec-8888521cc6d3", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548cc7dcc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa", Pod:"calico-apiserver-548cc7dcc6-5f4ww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4baba3c62cc", MAC:"fe:17:47:48:e6:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:22.883635 containerd[1433]: 2025-01-29 11:01:22.878 [INFO][4481] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-5f4ww" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--5f4ww-eth0" Jan 29 11:01:22.920071 containerd[1433]: time="2025-01-29T11:01:22.919945241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:22.920071 containerd[1433]: time="2025-01-29T11:01:22.920014118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:22.920071 containerd[1433]: time="2025-01-29T11:01:22.920028878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:22.920997 containerd[1433]: time="2025-01-29T11:01:22.920914200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:22.972253 systemd[1]: Started cri-containerd-350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa.scope - libcontainer container 350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa. 
Jan 29 11:01:23.005634 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:01:23.039883 containerd[1433]: time="2025-01-29T11:01:23.039824857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-5f4ww,Uid:5810df49-717b-4ac3-90ec-8888521cc6d3,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa\"" Jan 29 11:01:23.046651 systemd-networkd[1344]: cali514a41062a6: Link UP Jan 29 11:01:23.046866 systemd-networkd[1344]: cali514a41062a6: Gained carrier Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.279 [INFO][4526] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.442 [INFO][4526] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0 calico-apiserver-548cc7dcc6- calico-apiserver bc8d959a-7f2e-4e77-bbe7-a5e311c45518 690 0 2025-01-29 11:01:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548cc7dcc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-548cc7dcc6-ddf92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali514a41062a6 [] []}} ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.442 [INFO][4526] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" 
Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.713 [INFO][4588] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" HandleID="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Workload="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.735 [INFO][4588] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" HandleID="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Workload="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000361d20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-548cc7dcc6-ddf92", "timestamp":"2025-01-29 11:01:22.713969731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.735 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.865 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.865 [INFO][4588] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.933 [INFO][4588] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.954 [INFO][4588] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.971 [INFO][4588] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.994 [INFO][4588] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.997 [INFO][4588] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.997 [INFO][4588] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:22.999 [INFO][4588] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89 Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:23.009 [INFO][4588] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:23.033 [INFO][4588] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:23.033 [INFO][4588] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" host="localhost" Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:23.033 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:01:23.066126 containerd[1433]: 2025-01-29 11:01:23.033 [INFO][4588] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" HandleID="k8s-pod-network.db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Workload="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.067123 containerd[1433]: 2025-01-29 11:01:23.039 [INFO][4526] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0", GenerateName:"calico-apiserver-548cc7dcc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc8d959a-7f2e-4e77-bbe7-a5e311c45518", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548cc7dcc6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-548cc7dcc6-ddf92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali514a41062a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.067123 containerd[1433]: 2025-01-29 11:01:23.039 [INFO][4526] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.067123 containerd[1433]: 2025-01-29 11:01:23.039 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali514a41062a6 ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.067123 containerd[1433]: 2025-01-29 11:01:23.046 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.067123 containerd[1433]: 2025-01-29 11:01:23.050 [INFO][4526] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0", GenerateName:"calico-apiserver-548cc7dcc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc8d959a-7f2e-4e77-bbe7-a5e311c45518", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548cc7dcc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89", Pod:"calico-apiserver-548cc7dcc6-ddf92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali514a41062a6", MAC:"d6:0f:ac:c9:f3:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.067123 containerd[1433]: 2025-01-29 11:01:23.063 [INFO][4526] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89" Namespace="calico-apiserver" Pod="calico-apiserver-548cc7dcc6-ddf92" WorkloadEndpoint="localhost-k8s-calico--apiserver--548cc7dcc6--ddf92-eth0" Jan 29 11:01:23.090898 containerd[1433]: time="2025-01-29T11:01:23.090773494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:23.091364 containerd[1433]: time="2025-01-29T11:01:23.090878370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:23.091716 containerd[1433]: time="2025-01-29T11:01:23.091510665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.092163 containerd[1433]: time="2025-01-29T11:01:23.091987045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.105704 systemd-networkd[1344]: cali92694aeb279: Link UP Jan 29 11:01:23.105853 systemd-networkd[1344]: cali92694aeb279: Gained carrier Jan 29 11:01:23.112289 systemd[1]: Started cri-containerd-db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89.scope - libcontainer container db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89. 
Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:22.244 [INFO][4508] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:22.443 [INFO][4508] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--t46bf-eth0 coredns-668d6bf9bc- kube-system fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b 687 0 2025-01-29 11:01:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-t46bf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92694aeb279 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:22.443 [INFO][4508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:22.713 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" HandleID="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Workload="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:22.735 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" 
HandleID="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Workload="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019d560), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-t46bf", "timestamp":"2025-01-29 11:01:22.713511271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:22.736 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.033 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.033 [INFO][4586] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.043 [INFO][4586] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.064 [INFO][4586] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.072 [INFO][4586] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.077 [INFO][4586] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.082 [INFO][4586] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.082 
[INFO][4586] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.085 [INFO][4586] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.090 [INFO][4586] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.096 [INFO][4586] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.096 [INFO][4586] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" host="localhost" Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.096 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:01:23.121269 containerd[1433]: 2025-01-29 11:01:23.096 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" HandleID="k8s-pod-network.4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Workload="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.122747 containerd[1433]: 2025-01-29 11:01:23.100 [INFO][4508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t46bf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-t46bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92694aeb279", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.122747 containerd[1433]: 2025-01-29 11:01:23.101 [INFO][4508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.122747 containerd[1433]: 2025-01-29 11:01:23.101 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92694aeb279 ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.122747 containerd[1433]: 2025-01-29 11:01:23.105 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.122747 containerd[1433]: 2025-01-29 11:01:23.107 [INFO][4508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t46bf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc", Pod:"coredns-668d6bf9bc-t46bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92694aeb279", MAC:"f6:fb:c2:40:c6:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.122747 containerd[1433]: 2025-01-29 11:01:23.116 [INFO][4508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-t46bf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t46bf-eth0" Jan 29 11:01:23.142190 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:01:23.173217 containerd[1433]: time="2025-01-29T11:01:23.173171430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548cc7dcc6-ddf92,Uid:bc8d959a-7f2e-4e77-bbe7-a5e311c45518,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89\"" Jan 29 11:01:23.179964 containerd[1433]: time="2025-01-29T11:01:23.179053154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:23.180232 containerd[1433]: time="2025-01-29T11:01:23.180159390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:23.180303 containerd[1433]: time="2025-01-29T11:01:23.180220267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.180727 containerd[1433]: time="2025-01-29T11:01:23.180568853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.184974 kubelet[2525]: E0129 11:01:23.184899 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:23.208329 systemd[1]: Started cri-containerd-4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc.scope - libcontainer container 4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc. 
Jan 29 11:01:23.226127 systemd-networkd[1344]: calif7fce1b4cda: Link UP Jan 29 11:01:23.226353 systemd-networkd[1344]: calif7fce1b4cda: Gained carrier Jan 29 11:01:23.229170 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:22.301 [INFO][4509] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:22.443 [INFO][4509] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0 calico-kube-controllers-5d55b65567- calico-system e2a43c5c-ca07-4add-b171-f2255f364fd9 689 0 2025-01-29 11:01:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d55b65567 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d55b65567-pc9f6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif7fce1b4cda [] []}} ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:22.443 [INFO][4509] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:22.719 [INFO][4587] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" HandleID="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Workload="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:22.737 [INFO][4587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" HandleID="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Workload="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000614780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d55b65567-pc9f6", "timestamp":"2025-01-29 11:01:22.719430298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:22.737 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.097 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.097 [INFO][4587] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.144 [INFO][4587] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.162 [INFO][4587] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.173 [INFO][4587] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.177 [INFO][4587] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.188 [INFO][4587] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.190 [INFO][4587] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.193 [INFO][4587] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4 Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.200 [INFO][4587] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.209 [INFO][4587] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.209 [INFO][4587] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" host="localhost" Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.211 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:01:23.244911 containerd[1433]: 2025-01-29 11:01:23.211 [INFO][4587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" HandleID="k8s-pod-network.364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Workload="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.246093 containerd[1433]: 2025-01-29 11:01:23.218 [INFO][4509] cni-plugin/k8s.go 386: Populated endpoint ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0", GenerateName:"calico-kube-controllers-5d55b65567-", Namespace:"calico-system", SelfLink:"", UID:"e2a43c5c-ca07-4add-b171-f2255f364fd9", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d55b65567", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d55b65567-pc9f6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7fce1b4cda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.246093 containerd[1433]: 2025-01-29 11:01:23.219 [INFO][4509] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.246093 containerd[1433]: 2025-01-29 11:01:23.220 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7fce1b4cda ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.246093 containerd[1433]: 2025-01-29 11:01:23.225 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.246093 containerd[1433]: 2025-01-29 11:01:23.226 [INFO][4509] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0", GenerateName:"calico-kube-controllers-5d55b65567-", Namespace:"calico-system", SelfLink:"", UID:"e2a43c5c-ca07-4add-b171-f2255f364fd9", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d55b65567", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4", Pod:"calico-kube-controllers-5d55b65567-pc9f6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7fce1b4cda", MAC:"c6:81:70:7b:94:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.246093 containerd[1433]: 2025-01-29 11:01:23.242 [INFO][4509] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4" Namespace="calico-system" Pod="calico-kube-controllers-5d55b65567-pc9f6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d55b65567--pc9f6-eth0" Jan 29 11:01:23.256156 kernel: bpftool[4989]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:01:23.273675 containerd[1433]: time="2025-01-29T11:01:23.273613842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t46bf,Uid:fe5a11eb-34f8-4ac2-b56a-f4cc11926f6b,Namespace:kube-system,Attempt:5,} returns sandbox id \"4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc\"" Jan 29 11:01:23.275669 kubelet[2525]: E0129 11:01:23.275631 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:23.280492 containerd[1433]: time="2025-01-29T11:01:23.279646200Z" level=info msg="CreateContainer within sandbox \"4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:01:23.300992 containerd[1433]: time="2025-01-29T11:01:23.291557443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:23.301625 containerd[1433]: time="2025-01-29T11:01:23.300866269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:23.301625 containerd[1433]: time="2025-01-29T11:01:23.300980825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.301625 containerd[1433]: time="2025-01-29T11:01:23.301394048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.316986 systemd-networkd[1344]: caliac50ced4e8d: Link UP Jan 29 11:01:23.317540 systemd-networkd[1344]: caliac50ced4e8d: Gained carrier Jan 29 11:01:23.318687 containerd[1433]: time="2025-01-29T11:01:23.318142017Z" level=info msg="CreateContainer within sandbox \"4f5a83878ddd8b74c0e6ec0db93abb26a1a14c81e770fef05dc4aa2b6ddb07dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c362fac0b66b6a5ab3b76f27f14c08f8603ecea465bbd59f38da9bb627824d25\"" Jan 29 11:01:23.323553 containerd[1433]: time="2025-01-29T11:01:23.321730953Z" level=info msg="StartContainer for \"c362fac0b66b6a5ab3b76f27f14c08f8603ecea465bbd59f38da9bb627824d25\"" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:22.245 [INFO][4494] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:22.448 [INFO][4494] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0 coredns-668d6bf9bc- kube-system f3a930f1-28d8-4b84-b302-2ee738d83501 683 0 2025-01-29 11:01:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6rzzz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac50ced4e8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:22.448 [INFO][4494] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:22.713 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" HandleID="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Workload="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:22.738 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" HandleID="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Workload="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003861e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6rzzz", "timestamp":"2025-01-29 11:01:22.713787739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:22.738 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.211 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.212 [INFO][4589] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.244 [INFO][4589] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.261 [INFO][4589] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.274 [INFO][4589] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.279 [INFO][4589] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.285 [INFO][4589] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.285 [INFO][4589] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.287 [INFO][4589] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5 Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.295 [INFO][4589] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.309 [INFO][4589] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.309 [INFO][4589] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" host="localhost" Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.310 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:01:23.340162 containerd[1433]: 2025-01-29 11:01:23.310 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" HandleID="k8s-pod-network.6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Workload="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 11:01:23.340827 containerd[1433]: 2025-01-29 11:01:23.314 [INFO][4494] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3a930f1-28d8-4b84-b302-2ee738d83501", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6rzzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac50ced4e8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.340827 containerd[1433]: 2025-01-29 11:01:23.315 [INFO][4494] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 11:01:23.340827 containerd[1433]: 2025-01-29 11:01:23.315 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac50ced4e8d ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 11:01:23.340827 containerd[1433]: 2025-01-29 11:01:23.317 [INFO][4494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 
11:01:23.340827 containerd[1433]: 2025-01-29 11:01:23.320 [INFO][4494] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3a930f1-28d8-4b84-b302-2ee738d83501", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5", Pod:"coredns-668d6bf9bc-6rzzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac50ced4e8d", MAC:"32:ec:4f:be:c5:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:01:23.340827 containerd[1433]: 2025-01-29 11:01:23.333 [INFO][4494] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rzzz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rzzz-eth0" Jan 29 11:01:23.345275 systemd[1]: Started cri-containerd-364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4.scope - libcontainer container 364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4. Jan 29 11:01:23.366458 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:01:23.372304 systemd[1]: Started cri-containerd-c362fac0b66b6a5ab3b76f27f14c08f8603ecea465bbd59f38da9bb627824d25.scope - libcontainer container c362fac0b66b6a5ab3b76f27f14c08f8603ecea465bbd59f38da9bb627824d25. Jan 29 11:01:23.373874 containerd[1433]: time="2025-01-29T11:01:23.373769346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:01:23.373874 containerd[1433]: time="2025-01-29T11:01:23.373830864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:01:23.373874 containerd[1433]: time="2025-01-29T11:01:23.373842823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.374148 containerd[1433]: time="2025-01-29T11:01:23.373923300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:01:23.388237 systemd[1]: Started cri-containerd-6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5.scope - libcontainer container 6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5. Jan 29 11:01:23.401497 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:01:23.404589 containerd[1433]: time="2025-01-29T11:01:23.404559031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d55b65567-pc9f6,Uid:e2a43c5c-ca07-4add-b171-f2255f364fd9,Namespace:calico-system,Attempt:5,} returns sandbox id \"364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4\"" Jan 29 11:01:23.419899 containerd[1433]: time="2025-01-29T11:01:23.419860978Z" level=info msg="StartContainer for \"c362fac0b66b6a5ab3b76f27f14c08f8603ecea465bbd59f38da9bb627824d25\" returns successfully" Jan 29 11:01:23.427099 containerd[1433]: time="2025-01-29T11:01:23.426983052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rzzz,Uid:f3a930f1-28d8-4b84-b302-2ee738d83501,Namespace:kube-system,Attempt:5,} returns sandbox id \"6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5\"" Jan 29 11:01:23.428109 kubelet[2525]: E0129 11:01:23.427808 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:23.430819 containerd[1433]: time="2025-01-29T11:01:23.430425114Z" level=info msg="CreateContainer within sandbox \"6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:01:23.457032 containerd[1433]: time="2025-01-29T11:01:23.456986609Z" level=info msg="CreateContainer within sandbox \"6d37b5e971570e049399da74ff7249085ad22b595b92efbbe178c3e635a4cba5\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be835beca092a4efcf0740ef73c8b4a3e2911ff6665adfe3264a470e7e360ac4\"" Jan 29 11:01:23.459156 containerd[1433]: time="2025-01-29T11:01:23.457901132Z" level=info msg="StartContainer for \"be835beca092a4efcf0740ef73c8b4a3e2911ff6665adfe3264a470e7e360ac4\"" Jan 29 11:01:23.502274 systemd-networkd[1344]: vxlan.calico: Link UP Jan 29 11:01:23.502281 systemd-networkd[1344]: vxlan.calico: Gained carrier Jan 29 11:01:23.503239 systemd[1]: Started cri-containerd-be835beca092a4efcf0740ef73c8b4a3e2911ff6665adfe3264a470e7e360ac4.scope - libcontainer container be835beca092a4efcf0740ef73c8b4a3e2911ff6665adfe3264a470e7e360ac4. Jan 29 11:01:23.582231 containerd[1433]: time="2025-01-29T11:01:23.582180349Z" level=info msg="StartContainer for \"be835beca092a4efcf0740ef73c8b4a3e2911ff6665adfe3264a470e7e360ac4\" returns successfully" Jan 29 11:01:23.929231 containerd[1433]: time="2025-01-29T11:01:23.929178754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:23.929768 containerd[1433]: time="2025-01-29T11:01:23.929728732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 11:01:23.930454 containerd[1433]: time="2025-01-29T11:01:23.930432024Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:23.932428 containerd[1433]: time="2025-01-29T11:01:23.932401625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:23.933070 containerd[1433]: time="2025-01-29T11:01:23.933038119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.078253851s" Jan 29 11:01:23.933070 containerd[1433]: time="2025-01-29T11:01:23.933066718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 11:01:23.934107 containerd[1433]: time="2025-01-29T11:01:23.934026560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:01:23.935107 containerd[1433]: time="2025-01-29T11:01:23.935011560Z" level=info msg="CreateContainer within sandbox \"d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:01:23.958370 containerd[1433]: time="2025-01-29T11:01:23.957834965Z" level=info msg="CreateContainer within sandbox \"d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ce154b250236e3888db9469eb7b7341544ea05b7c34f2239d9facaaa3fc6bda3\"" Jan 29 11:01:23.958532 containerd[1433]: time="2025-01-29T11:01:23.958505738Z" level=info msg="StartContainer for \"ce154b250236e3888db9469eb7b7341544ea05b7c34f2239d9facaaa3fc6bda3\"" Jan 29 11:01:23.958650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229207420.mount: Deactivated successfully. Jan 29 11:01:23.982246 systemd[1]: Started cri-containerd-ce154b250236e3888db9469eb7b7341544ea05b7c34f2239d9facaaa3fc6bda3.scope - libcontainer container ce154b250236e3888db9469eb7b7341544ea05b7c34f2239d9facaaa3fc6bda3. 
Jan 29 11:01:24.007511 containerd[1433]: time="2025-01-29T11:01:24.007471831Z" level=info msg="StartContainer for \"ce154b250236e3888db9469eb7b7341544ea05b7c34f2239d9facaaa3fc6bda3\" returns successfully" Jan 29 11:01:24.146397 systemd-networkd[1344]: cali4baba3c62cc: Gained IPv6LL Jan 29 11:01:24.146646 systemd-networkd[1344]: cali79eec63eb6c: Gained IPv6LL Jan 29 11:01:24.146808 systemd-networkd[1344]: cali92694aeb279: Gained IPv6LL Jan 29 11:01:24.202388 kubelet[2525]: E0129 11:01:24.202298 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:24.211545 kubelet[2525]: E0129 11:01:24.211412 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:24.211545 kubelet[2525]: E0129 11:01:24.211518 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:24.218902 kubelet[2525]: I0129 11:01:24.217872 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t46bf" podStartSLOduration=22.217857161 podStartE2EDuration="22.217857161s" podCreationTimestamp="2025-01-29 11:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:01:24.216537971 +0000 UTC m=+28.403163596" watchObservedRunningTime="2025-01-29 11:01:24.217857161 +0000 UTC m=+28.404482786" Jan 29 11:01:24.273130 kubelet[2525]: I0129 11:01:24.272958 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6rzzz" podStartSLOduration=22.272940091 podStartE2EDuration="22.272940091s" podCreationTimestamp="2025-01-29 11:01:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:01:24.253920806 +0000 UTC m=+28.440546431" watchObservedRunningTime="2025-01-29 11:01:24.272940091 +0000 UTC m=+28.459565676" Jan 29 11:01:24.658221 systemd-networkd[1344]: calif7fce1b4cda: Gained IPv6LL Jan 29 11:01:24.848654 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:46832.service - OpenSSH per-connection server daemon (10.0.0.1:46832). Jan 29 11:01:24.850188 systemd-networkd[1344]: vxlan.calico: Gained IPv6LL Jan 29 11:01:24.850472 systemd-networkd[1344]: cali514a41062a6: Gained IPv6LL Jan 29 11:01:24.912634 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 46832 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:24.914049 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:24.914436 systemd-networkd[1344]: caliac50ced4e8d: Gained IPv6LL Jan 29 11:01:24.918111 systemd-logind[1416]: New session 8 of user core. Jan 29 11:01:24.928460 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:01:25.122673 sshd[5327]: Connection closed by 10.0.0.1 port 46832 Jan 29 11:01:25.123381 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:25.127957 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:46832.service: Deactivated successfully. Jan 29 11:01:25.129855 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:01:25.130548 systemd-logind[1416]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:01:25.131471 systemd-logind[1416]: Removed session 8. 
Jan 29 11:01:25.209786 kubelet[2525]: E0129 11:01:25.209679 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:25.210318 kubelet[2525]: E0129 11:01:25.210290 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:25.656000 containerd[1433]: time="2025-01-29T11:01:25.655900439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:25.657185 containerd[1433]: time="2025-01-29T11:01:25.656991920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 11:01:25.657851 containerd[1433]: time="2025-01-29T11:01:25.657821331Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:25.660531 containerd[1433]: time="2025-01-29T11:01:25.660497757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:25.661330 containerd[1433]: time="2025-01-29T11:01:25.661256850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.727200771s" Jan 29 11:01:25.661330 containerd[1433]: time="2025-01-29T11:01:25.661287609Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:01:25.662639 containerd[1433]: time="2025-01-29T11:01:25.662599003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:01:25.664726 containerd[1433]: time="2025-01-29T11:01:25.664352141Z" level=info msg="CreateContainer within sandbox \"350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:01:25.678562 containerd[1433]: time="2025-01-29T11:01:25.678517602Z" level=info msg="CreateContainer within sandbox \"350fc60ac7eff05cfe06429ad75e8b0217df4498da9f5b5bd56e833c119759aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2602d47358a5bc5a67e244f6fed3b824c0130d63ab3485c6a7039c6404479979\"" Jan 29 11:01:25.678999 containerd[1433]: time="2025-01-29T11:01:25.678965986Z" level=info msg="StartContainer for \"2602d47358a5bc5a67e244f6fed3b824c0130d63ab3485c6a7039c6404479979\"" Jan 29 11:01:25.727266 systemd[1]: Started cri-containerd-2602d47358a5bc5a67e244f6fed3b824c0130d63ab3485c6a7039c6404479979.scope - libcontainer container 2602d47358a5bc5a67e244f6fed3b824c0130d63ab3485c6a7039c6404479979. 
Jan 29 11:01:25.848363 containerd[1433]: time="2025-01-29T11:01:25.848301319Z" level=info msg="StartContainer for \"2602d47358a5bc5a67e244f6fed3b824c0130d63ab3485c6a7039c6404479979\" returns successfully" Jan 29 11:01:25.963431 containerd[1433]: time="2025-01-29T11:01:25.963302333Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:25.966389 containerd[1433]: time="2025-01-29T11:01:25.964979789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:01:25.971460 containerd[1433]: time="2025-01-29T11:01:25.971414021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 308.612266ms" Jan 29 11:01:25.971460 containerd[1433]: time="2025-01-29T11:01:25.971455820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:01:25.973239 containerd[1433]: time="2025-01-29T11:01:25.973029079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:01:25.975404 containerd[1433]: time="2025-01-29T11:01:25.975359989Z" level=info msg="CreateContainer within sandbox \"db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:01:25.989207 containerd[1433]: time="2025-01-29T11:01:25.989172058Z" level=info msg="CreateContainer within sandbox \"db1ee2b2b2403545491c5e4db137409517b031d3164547f2e3e02d794da81e89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"364f3c660f7e33ed0173a514377737751624293d154991ffb057476bb54b1c6a\"" Jan 29 11:01:25.990128 containerd[1433]: time="2025-01-29T11:01:25.990072583Z" level=info msg="StartContainer for \"364f3c660f7e33ed0173a514377737751624293d154991ffb057476bb54b1c6a\"" Jan 29 11:01:26.015292 systemd[1]: Started cri-containerd-364f3c660f7e33ed0173a514377737751624293d154991ffb057476bb54b1c6a.scope - libcontainer container 364f3c660f7e33ed0173a514377737751624293d154991ffb057476bb54b1c6a. Jan 29 11:01:26.059234 containerd[1433]: time="2025-01-29T11:01:26.059184747Z" level=info msg="StartContainer for \"364f3c660f7e33ed0173a514377737751624293d154991ffb057476bb54b1c6a\" returns successfully" Jan 29 11:01:26.223657 kubelet[2525]: E0129 11:01:26.223549 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:01:26.230197 kubelet[2525]: I0129 11:01:26.230145 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548cc7dcc6-ddf92" podStartSLOduration=16.434141765 podStartE2EDuration="19.230131408s" podCreationTimestamp="2025-01-29 11:01:07 +0000 UTC" firstStartedPulling="2025-01-29 11:01:23.176239387 +0000 UTC m=+27.362865012" lastFinishedPulling="2025-01-29 11:01:25.97222903 +0000 UTC m=+30.158854655" observedRunningTime="2025-01-29 11:01:26.229545865 +0000 UTC m=+30.416171490" watchObservedRunningTime="2025-01-29 11:01:26.230131408 +0000 UTC m=+30.416757033" Jan 29 11:01:26.245903 kubelet[2525]: I0129 11:01:26.245848 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548cc7dcc6-5f4ww" podStartSLOduration=16.627613463 podStartE2EDuration="19.245831431s" podCreationTimestamp="2025-01-29 11:01:07 +0000 UTC" firstStartedPulling="2025-01-29 11:01:23.04423772 +0000 UTC m=+27.230863345" lastFinishedPulling="2025-01-29 
11:01:25.662455688 +0000 UTC m=+29.849081313" observedRunningTime="2025-01-29 11:01:26.2448233 +0000 UTC m=+30.431448925" watchObservedRunningTime="2025-01-29 11:01:26.245831431 +0000 UTC m=+30.432457056" Jan 29 11:01:27.235053 kubelet[2525]: I0129 11:01:27.234995 2525 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:01:27.512875 containerd[1433]: time="2025-01-29T11:01:27.512708223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:27.514640 containerd[1433]: time="2025-01-29T11:01:27.514538291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 11:01:27.515600 containerd[1433]: time="2025-01-29T11:01:27.515569022Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:27.517541 containerd[1433]: time="2025-01-29T11:01:27.517477368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:27.518221 containerd[1433]: time="2025-01-29T11:01:27.518182948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.544988115s" Jan 29 11:01:27.518221 containerd[1433]: time="2025-01-29T11:01:27.518217067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 11:01:27.519938 containerd[1433]: time="2025-01-29T11:01:27.519752744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:01:27.530313 containerd[1433]: time="2025-01-29T11:01:27.530262086Z" level=info msg="CreateContainer within sandbox \"364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:01:27.545109 containerd[1433]: time="2025-01-29T11:01:27.545025268Z" level=info msg="CreateContainer within sandbox \"364c160db95fcd419d8bb0a25d4ef292631333d5cdff341997d34a586fd872a4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"87838ca7aee7aafc3b0abbb2195be3133bab4de05c2b535dbfbc9ee879a8a57e\"" Jan 29 11:01:27.545929 containerd[1433]: time="2025-01-29T11:01:27.545592052Z" level=info msg="StartContainer for \"87838ca7aee7aafc3b0abbb2195be3133bab4de05c2b535dbfbc9ee879a8a57e\"" Jan 29 11:01:27.575324 systemd[1]: Started cri-containerd-87838ca7aee7aafc3b0abbb2195be3133bab4de05c2b535dbfbc9ee879a8a57e.scope - libcontainer container 87838ca7aee7aafc3b0abbb2195be3133bab4de05c2b535dbfbc9ee879a8a57e. 
Jan 29 11:01:27.608264 containerd[1433]: time="2025-01-29T11:01:27.608169600Z" level=info msg="StartContainer for \"87838ca7aee7aafc3b0abbb2195be3133bab4de05c2b535dbfbc9ee879a8a57e\" returns successfully" Jan 29 11:01:28.255426 kubelet[2525]: I0129 11:01:28.253202 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d55b65567-pc9f6" podStartSLOduration=17.140179936 podStartE2EDuration="21.253183731s" podCreationTimestamp="2025-01-29 11:01:07 +0000 UTC" firstStartedPulling="2025-01-29 11:01:23.40633404 +0000 UTC m=+27.592959665" lastFinishedPulling="2025-01-29 11:01:27.519337835 +0000 UTC m=+31.705963460" observedRunningTime="2025-01-29 11:01:28.252888699 +0000 UTC m=+32.439514284" watchObservedRunningTime="2025-01-29 11:01:28.253183731 +0000 UTC m=+32.439809356" Jan 29 11:01:28.658514 containerd[1433]: time="2025-01-29T11:01:28.658453732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:28.663682 containerd[1433]: time="2025-01-29T11:01:28.663618030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 11:01:28.666047 containerd[1433]: time="2025-01-29T11:01:28.666007684Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:28.671609 containerd[1433]: time="2025-01-29T11:01:28.671557731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:01:28.672944 containerd[1433]: time="2025-01-29T11:01:28.672908014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" 
with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.153124951s" Jan 29 11:01:28.672991 containerd[1433]: time="2025-01-29T11:01:28.672943573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 11:01:28.676459 containerd[1433]: time="2025-01-29T11:01:28.676403718Z" level=info msg="CreateContainer within sandbox \"d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:01:28.696107 containerd[1433]: time="2025-01-29T11:01:28.695958660Z" level=info msg="CreateContainer within sandbox \"d9deee49187acc7bd06d7cda47b464c433f70420e07514901fa28d8aef48297b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"495ea17ad2c072967fbf786fb48fa7f7541a31faacb68802aca23b7cec729dd1\"" Jan 29 11:01:28.696666 containerd[1433]: time="2025-01-29T11:01:28.696635561Z" level=info msg="StartContainer for \"495ea17ad2c072967fbf786fb48fa7f7541a31faacb68802aca23b7cec729dd1\"" Jan 29 11:01:28.738398 systemd[1]: Started cri-containerd-495ea17ad2c072967fbf786fb48fa7f7541a31faacb68802aca23b7cec729dd1.scope - libcontainer container 495ea17ad2c072967fbf786fb48fa7f7541a31faacb68802aca23b7cec729dd1. 
Jan 29 11:01:28.797847 containerd[1433]: time="2025-01-29T11:01:28.797717978Z" level=info msg="StartContainer for \"495ea17ad2c072967fbf786fb48fa7f7541a31faacb68802aca23b7cec729dd1\" returns successfully" Jan 29 11:01:28.989371 kubelet[2525]: I0129 11:01:28.989255 2525 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:01:28.996470 kubelet[2525]: I0129 11:01:28.996433 2525 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:01:29.262922 kubelet[2525]: I0129 11:01:29.262175 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jfpvg" podStartSLOduration=16.442493693 podStartE2EDuration="22.262160107s" podCreationTimestamp="2025-01-29 11:01:07 +0000 UTC" firstStartedPulling="2025-01-29 11:01:22.854138256 +0000 UTC m=+27.040763881" lastFinishedPulling="2025-01-29 11:01:28.67380467 +0000 UTC m=+32.860430295" observedRunningTime="2025-01-29 11:01:29.260065844 +0000 UTC m=+33.446691469" watchObservedRunningTime="2025-01-29 11:01:29.262160107 +0000 UTC m=+33.448785732" Jan 29 11:01:30.137813 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:46842.service - OpenSSH per-connection server daemon (10.0.0.1:46842). Jan 29 11:01:30.212811 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 46842 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:30.214448 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:30.218885 systemd-logind[1416]: New session 9 of user core. Jan 29 11:01:30.229264 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 29 11:01:30.442179 sshd[5554]: Connection closed by 10.0.0.1 port 46842 Jan 29 11:01:30.442696 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:30.447475 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:46842.service: Deactivated successfully. Jan 29 11:01:30.450539 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:01:30.451274 systemd-logind[1416]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:01:30.452250 systemd-logind[1416]: Removed session 9. Jan 29 11:01:35.454608 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:46768.service - OpenSSH per-connection server daemon (10.0.0.1:46768). Jan 29 11:01:35.502488 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 46768 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:35.503920 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:35.508205 systemd-logind[1416]: New session 10 of user core. Jan 29 11:01:35.515338 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:01:35.755136 sshd[5580]: Connection closed by 10.0.0.1 port 46768 Jan 29 11:01:35.757206 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:35.767609 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:46768.service: Deactivated successfully. Jan 29 11:01:35.769439 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:01:35.771179 systemd-logind[1416]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:01:35.780383 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:46770.service - OpenSSH per-connection server daemon (10.0.0.1:46770). Jan 29 11:01:35.781817 systemd-logind[1416]: Removed session 10. 
Jan 29 11:01:35.833029 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 46770 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:35.834522 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:35.838876 systemd-logind[1416]: New session 11 of user core. Jan 29 11:01:35.845308 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:01:36.064524 sshd[5595]: Connection closed by 10.0.0.1 port 46770 Jan 29 11:01:36.065273 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:36.079135 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:46770.service: Deactivated successfully. Jan 29 11:01:36.085587 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:01:36.089430 systemd-logind[1416]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:01:36.099639 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:46780.service - OpenSSH per-connection server daemon (10.0.0.1:46780). Jan 29 11:01:36.102792 systemd-logind[1416]: Removed session 11. Jan 29 11:01:36.151451 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 46780 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:36.152959 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:36.157540 systemd-logind[1416]: New session 12 of user core. Jan 29 11:01:36.168328 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:01:36.314991 sshd[5608]: Connection closed by 10.0.0.1 port 46780 Jan 29 11:01:36.315818 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:36.319022 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:46780.service: Deactivated successfully. Jan 29 11:01:36.320681 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:01:36.323213 systemd-logind[1416]: Session 12 logged out. Waiting for processes to exit. 
Jan 29 11:01:36.324365 systemd-logind[1416]: Removed session 12. Jan 29 11:01:41.334658 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:46796.service - OpenSSH per-connection server daemon (10.0.0.1:46796). Jan 29 11:01:41.403160 sshd[5627]: Accepted publickey for core from 10.0.0.1 port 46796 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:41.406514 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:41.412162 systemd-logind[1416]: New session 13 of user core. Jan 29 11:01:41.417235 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:01:41.678184 sshd[5629]: Connection closed by 10.0.0.1 port 46796 Jan 29 11:01:41.678502 sshd-session[5627]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:41.688309 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:46796.service: Deactivated successfully. Jan 29 11:01:41.691646 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:01:41.693514 systemd-logind[1416]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:01:41.698859 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:46802.service - OpenSSH per-connection server daemon (10.0.0.1:46802). Jan 29 11:01:41.700123 systemd-logind[1416]: Removed session 13. Jan 29 11:01:41.752041 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 46802 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:41.753424 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:41.757532 systemd-logind[1416]: New session 14 of user core. Jan 29 11:01:41.768307 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 11:01:41.969862 sshd[5644]: Connection closed by 10.0.0.1 port 46802 Jan 29 11:01:41.970728 sshd-session[5642]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:41.981900 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:46802.service: Deactivated successfully. Jan 29 11:01:41.984201 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:01:41.986020 systemd-logind[1416]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:01:42.000469 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:46808.service - OpenSSH per-connection server daemon (10.0.0.1:46808). Jan 29 11:01:42.001740 systemd-logind[1416]: Removed session 14. Jan 29 11:01:42.045560 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 46808 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:42.046889 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:42.051000 systemd-logind[1416]: New session 15 of user core. Jan 29 11:01:42.061268 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:01:42.825903 sshd[5657]: Connection closed by 10.0.0.1 port 46808 Jan 29 11:01:42.827049 sshd-session[5655]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:42.839754 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:46808.service: Deactivated successfully. Jan 29 11:01:42.845874 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:01:42.848349 systemd-logind[1416]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:01:42.858250 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:45486.service - OpenSSH per-connection server daemon (10.0.0.1:45486). Jan 29 11:01:42.861603 systemd-logind[1416]: Removed session 15. 
Jan 29 11:01:42.907273 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 45486 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:42.908656 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:42.912441 systemd-logind[1416]: New session 16 of user core. Jan 29 11:01:42.921360 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:01:43.232771 sshd[5676]: Connection closed by 10.0.0.1 port 45486 Jan 29 11:01:43.233629 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:43.245749 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:45486.service: Deactivated successfully. Jan 29 11:01:43.248150 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:01:43.252064 systemd-logind[1416]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:01:43.259553 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:45502.service - OpenSSH per-connection server daemon (10.0.0.1:45502). Jan 29 11:01:43.261532 systemd-logind[1416]: Removed session 16. Jan 29 11:01:43.308571 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 45502 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:43.310289 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:43.314498 systemd-logind[1416]: New session 17 of user core. Jan 29 11:01:43.324257 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:01:43.481574 sshd[5688]: Connection closed by 10.0.0.1 port 45502 Jan 29 11:01:43.482166 sshd-session[5686]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:43.485890 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:45502.service: Deactivated successfully. Jan 29 11:01:43.488303 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:01:43.489156 systemd-logind[1416]: Session 17 logged out. Waiting for processes to exit. 
Jan 29 11:01:43.489873 systemd-logind[1416]: Removed session 17.
Jan 29 11:01:48.493749 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:45518.service - OpenSSH per-connection server daemon (10.0.0.1:45518).
Jan 29 11:01:48.540053 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 45518 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 11:01:48.541595 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:01:48.545626 systemd-logind[1416]: New session 18 of user core.
Jan 29 11:01:48.557305 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:01:48.677466 sshd[5711]: Connection closed by 10.0.0.1 port 45518
Jan 29 11:01:48.678831 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Jan 29 11:01:48.681826 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:45518.service: Deactivated successfully.
Jan 29 11:01:48.684862 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:01:48.685723 systemd-logind[1416]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:01:48.688620 systemd-logind[1416]: Removed session 18.
Jan 29 11:01:53.689348 systemd[1]: Started sshd@18-10.0.0.86:22-10.0.0.1:34312.service - OpenSSH per-connection server daemon (10.0.0.1:34312).
Jan 29 11:01:53.744333 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 34312 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 11:01:53.745350 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:01:53.750108 systemd-logind[1416]: New session 19 of user core.
Jan 29 11:01:53.765563 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:01:53.895602 sshd[5731]: Connection closed by 10.0.0.1 port 34312
Jan 29 11:01:53.895422 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Jan 29 11:01:53.899177 systemd[1]: sshd@18-10.0.0.86:22-10.0.0.1:34312.service: Deactivated successfully.
Jan 29 11:01:53.902640 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:01:53.904235 systemd-logind[1416]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:01:53.904985 systemd-logind[1416]: Removed session 19.
Jan 29 11:01:54.275027 kubelet[2525]: E0129 11:01:54.274640 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:01:55.882959 containerd[1433]: time="2025-01-29T11:01:55.882376860Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\""
Jan 29 11:01:55.882959 containerd[1433]: time="2025-01-29T11:01:55.882489819Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully"
Jan 29 11:01:55.882959 containerd[1433]: time="2025-01-29T11:01:55.882500939Z" level=info msg="StopPodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully"
Jan 29 11:01:55.882959 containerd[1433]: time="2025-01-29T11:01:55.882928293Z" level=info msg="RemovePodSandbox for \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\""
Jan 29 11:01:55.883399 containerd[1433]: time="2025-01-29T11:01:55.883317568Z" level=info msg="Forcibly stopping sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\""
Jan 29 11:01:55.883399 containerd[1433]: time="2025-01-29T11:01:55.883389047Z" level=info msg="TearDown network for sandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" successfully"
Jan 29 11:01:55.896253 containerd[1433]: time="2025-01-29T11:01:55.896178353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.896576 containerd[1433]: time="2025-01-29T11:01:55.896358591Z" level=info msg="RemovePodSandbox \"0cdc1a2e1342520ea773869ae75c0bfd7fdca640f5b7b46f574ddb7a3834bfca\" returns successfully"
Jan 29 11:01:55.900451 containerd[1433]: time="2025-01-29T11:01:55.897477416Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\""
Jan 29 11:01:55.900451 containerd[1433]: time="2025-01-29T11:01:55.900308378Z" level=info msg="TearDown network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" successfully"
Jan 29 11:01:55.900451 containerd[1433]: time="2025-01-29T11:01:55.900327817Z" level=info msg="StopPodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" returns successfully"
Jan 29 11:01:55.900754 containerd[1433]: time="2025-01-29T11:01:55.900639053Z" level=info msg="RemovePodSandbox for \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\""
Jan 29 11:01:55.900754 containerd[1433]: time="2025-01-29T11:01:55.900670253Z" level=info msg="Forcibly stopping sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\""
Jan 29 11:01:55.900754 containerd[1433]: time="2025-01-29T11:01:55.900735812Z" level=info msg="TearDown network for sandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" successfully"
Jan 29 11:01:55.903745 containerd[1433]: time="2025-01-29T11:01:55.903314657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.903745 containerd[1433]: time="2025-01-29T11:01:55.903368136Z" level=info msg="RemovePodSandbox \"053b21553de1a28fdbb2f3acc22bb75c20e34eff63ff127851a34562c1bf9825\" returns successfully"
Jan 29 11:01:55.906235 containerd[1433]: time="2025-01-29T11:01:55.905169992Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\""
Jan 29 11:01:55.906235 containerd[1433]: time="2025-01-29T11:01:55.905257070Z" level=info msg="TearDown network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" successfully"
Jan 29 11:01:55.906235 containerd[1433]: time="2025-01-29T11:01:55.905266790Z" level=info msg="StopPodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" returns successfully"
Jan 29 11:01:55.906235 containerd[1433]: time="2025-01-29T11:01:55.905706664Z" level=info msg="RemovePodSandbox for \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\""
Jan 29 11:01:55.906235 containerd[1433]: time="2025-01-29T11:01:55.905728024Z" level=info msg="Forcibly stopping sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\""
Jan 29 11:01:55.906235 containerd[1433]: time="2025-01-29T11:01:55.905785703Z" level=info msg="TearDown network for sandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" successfully"
Jan 29 11:01:55.910606 containerd[1433]: time="2025-01-29T11:01:55.910541999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.910877 containerd[1433]: time="2025-01-29T11:01:55.910850275Z" level=info msg="RemovePodSandbox \"7a9dc4273f084cf2988386859ac586a7be58a09a7c33733350fdabb9d98dd740\" returns successfully"
Jan 29 11:01:55.914463 containerd[1433]: time="2025-01-29T11:01:55.914428186Z" level=info msg="StopPodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\""
Jan 29 11:01:55.914694 containerd[1433]: time="2025-01-29T11:01:55.914647823Z" level=info msg="TearDown network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" successfully"
Jan 29 11:01:55.914694 containerd[1433]: time="2025-01-29T11:01:55.914688023Z" level=info msg="StopPodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" returns successfully"
Jan 29 11:01:55.915010 containerd[1433]: time="2025-01-29T11:01:55.914982259Z" level=info msg="RemovePodSandbox for \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\""
Jan 29 11:01:55.915010 containerd[1433]: time="2025-01-29T11:01:55.915007418Z" level=info msg="Forcibly stopping sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\""
Jan 29 11:01:55.915112 containerd[1433]: time="2025-01-29T11:01:55.915062378Z" level=info msg="TearDown network for sandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" successfully"
Jan 29 11:01:55.917556 containerd[1433]: time="2025-01-29T11:01:55.917526024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.917622 containerd[1433]: time="2025-01-29T11:01:55.917580264Z" level=info msg="RemovePodSandbox \"7fbf2a830dc07c39c809305d0d059a5c762e1054776262a3cd2b655f5d60c6f1\" returns successfully"
Jan 29 11:01:55.917962 containerd[1433]: time="2025-01-29T11:01:55.917920019Z" level=info msg="StopPodSandbox for \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\""
Jan 29 11:01:55.918174 containerd[1433]: time="2025-01-29T11:01:55.918026138Z" level=info msg="TearDown network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\" successfully"
Jan 29 11:01:55.918174 containerd[1433]: time="2025-01-29T11:01:55.918040537Z" level=info msg="StopPodSandbox for \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\" returns successfully"
Jan 29 11:01:55.918508 containerd[1433]: time="2025-01-29T11:01:55.918459532Z" level=info msg="RemovePodSandbox for \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\""
Jan 29 11:01:55.918508 containerd[1433]: time="2025-01-29T11:01:55.918481091Z" level=info msg="Forcibly stopping sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\""
Jan 29 11:01:55.918608 containerd[1433]: time="2025-01-29T11:01:55.918550290Z" level=info msg="TearDown network for sandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\" successfully"
Jan 29 11:01:55.921021 containerd[1433]: time="2025-01-29T11:01:55.920986017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.921107 containerd[1433]: time="2025-01-29T11:01:55.921067696Z" level=info msg="RemovePodSandbox \"33a79477019c32f827f8f90546761b292b26d0790d526901d6d3b46dd6a4c1c8\" returns successfully"
Jan 29 11:01:55.921473 containerd[1433]: time="2025-01-29T11:01:55.921422691Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\""
Jan 29 11:01:55.921537 containerd[1433]: time="2025-01-29T11:01:55.921526730Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully"
Jan 29 11:01:55.921586 containerd[1433]: time="2025-01-29T11:01:55.921538610Z" level=info msg="StopPodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully"
Jan 29 11:01:55.921860 containerd[1433]: time="2025-01-29T11:01:55.921830686Z" level=info msg="RemovePodSandbox for \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\""
Jan 29 11:01:55.922054 containerd[1433]: time="2025-01-29T11:01:55.921860606Z" level=info msg="Forcibly stopping sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\""
Jan 29 11:01:55.922054 containerd[1433]: time="2025-01-29T11:01:55.921930885Z" level=info msg="TearDown network for sandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" successfully"
Jan 29 11:01:55.930526 containerd[1433]: time="2025-01-29T11:01:55.930487409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.930617 containerd[1433]: time="2025-01-29T11:01:55.930557568Z" level=info msg="RemovePodSandbox \"3841e084053bbd4060ed26686a59c219211ccc4d5c0d63910e483d7267db09bf\" returns successfully"
Jan 29 11:01:55.931059 containerd[1433]: time="2025-01-29T11:01:55.931028041Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\""
Jan 29 11:01:55.931160 containerd[1433]: time="2025-01-29T11:01:55.931134800Z" level=info msg="TearDown network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" successfully"
Jan 29 11:01:55.931160 containerd[1433]: time="2025-01-29T11:01:55.931145360Z" level=info msg="StopPodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" returns successfully"
Jan 29 11:01:55.931448 containerd[1433]: time="2025-01-29T11:01:55.931390196Z" level=info msg="RemovePodSandbox for \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\""
Jan 29 11:01:55.931448 containerd[1433]: time="2025-01-29T11:01:55.931415636Z" level=info msg="Forcibly stopping sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\""
Jan 29 11:01:55.931519 containerd[1433]: time="2025-01-29T11:01:55.931470755Z" level=info msg="TearDown network for sandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" successfully"
Jan 29 11:01:55.933995 containerd[1433]: time="2025-01-29T11:01:55.933962922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.934121 containerd[1433]: time="2025-01-29T11:01:55.934020681Z" level=info msg="RemovePodSandbox \"b9e25bbf47b638222e293805a32e9d91e042e0ac10b98f2c12407d5152e4cb4b\" returns successfully"
Jan 29 11:01:55.934367 containerd[1433]: time="2025-01-29T11:01:55.934342396Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\""
Jan 29 11:01:55.934528 containerd[1433]: time="2025-01-29T11:01:55.934435115Z" level=info msg="TearDown network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" successfully"
Jan 29 11:01:55.934528 containerd[1433]: time="2025-01-29T11:01:55.934445555Z" level=info msg="StopPodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" returns successfully"
Jan 29 11:01:55.935038 containerd[1433]: time="2025-01-29T11:01:55.934985188Z" level=info msg="RemovePodSandbox for \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\""
Jan 29 11:01:55.935038 containerd[1433]: time="2025-01-29T11:01:55.935010267Z" level=info msg="Forcibly stopping sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\""
Jan 29 11:01:55.935211 containerd[1433]: time="2025-01-29T11:01:55.935064427Z" level=info msg="TearDown network for sandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" successfully"
Jan 29 11:01:55.949099 containerd[1433]: time="2025-01-29T11:01:55.948842400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.949099 containerd[1433]: time="2025-01-29T11:01:55.948923919Z" level=info msg="RemovePodSandbox \"c2f97e92ab2e2aa6d3dd62f12056a8a46dc3d7a6a4ecac68af2a0f3d9e1a904f\" returns successfully"
Jan 29 11:01:55.949529 containerd[1433]: time="2025-01-29T11:01:55.949456952Z" level=info msg="StopPodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\""
Jan 29 11:01:55.949575 containerd[1433]: time="2025-01-29T11:01:55.949557910Z" level=info msg="TearDown network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" successfully"
Jan 29 11:01:55.949575 containerd[1433]: time="2025-01-29T11:01:55.949568630Z" level=info msg="StopPodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" returns successfully"
Jan 29 11:01:55.950129 containerd[1433]: time="2025-01-29T11:01:55.949889826Z" level=info msg="RemovePodSandbox for \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\""
Jan 29 11:01:55.950129 containerd[1433]: time="2025-01-29T11:01:55.949917745Z" level=info msg="Forcibly stopping sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\""
Jan 29 11:01:55.950129 containerd[1433]: time="2025-01-29T11:01:55.949972705Z" level=info msg="TearDown network for sandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" successfully"
Jan 29 11:01:55.952976 containerd[1433]: time="2025-01-29T11:01:55.952918905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.953066 containerd[1433]: time="2025-01-29T11:01:55.952989304Z" level=info msg="RemovePodSandbox \"7472dc63f6b7694bdfd41fe070ee474b2dafa5b7dbbac1359a3c6113fd752454\" returns successfully"
Jan 29 11:01:55.953406 containerd[1433]: time="2025-01-29T11:01:55.953383459Z" level=info msg="StopPodSandbox for \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\""
Jan 29 11:01:55.953653 containerd[1433]: time="2025-01-29T11:01:55.953577096Z" level=info msg="TearDown network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\" successfully"
Jan 29 11:01:55.953653 containerd[1433]: time="2025-01-29T11:01:55.953593936Z" level=info msg="StopPodSandbox for \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\" returns successfully"
Jan 29 11:01:55.953942 containerd[1433]: time="2025-01-29T11:01:55.953847092Z" level=info msg="RemovePodSandbox for \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\""
Jan 29 11:01:55.953942 containerd[1433]: time="2025-01-29T11:01:55.953880692Z" level=info msg="Forcibly stopping sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\""
Jan 29 11:01:55.954004 containerd[1433]: time="2025-01-29T11:01:55.953956051Z" level=info msg="TearDown network for sandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\" successfully"
Jan 29 11:01:55.956406 containerd[1433]: time="2025-01-29T11:01:55.956338058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.956406 containerd[1433]: time="2025-01-29T11:01:55.956392338Z" level=info msg="RemovePodSandbox \"8e1a81c48d8057b1d4f5908ee8fc1a4b88a4b3f5ec43af6adc4cd1effd70f561\" returns successfully"
Jan 29 11:01:55.956716 containerd[1433]: time="2025-01-29T11:01:55.956695294Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\""
Jan 29 11:01:55.956825 containerd[1433]: time="2025-01-29T11:01:55.956807012Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully"
Jan 29 11:01:55.956852 containerd[1433]: time="2025-01-29T11:01:55.956824772Z" level=info msg="StopPodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully"
Jan 29 11:01:55.957181 containerd[1433]: time="2025-01-29T11:01:55.957138088Z" level=info msg="RemovePodSandbox for \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\""
Jan 29 11:01:55.957181 containerd[1433]: time="2025-01-29T11:01:55.957167207Z" level=info msg="Forcibly stopping sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\""
Jan 29 11:01:55.957181 containerd[1433]: time="2025-01-29T11:01:55.957230126Z" level=info msg="TearDown network for sandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" successfully"
Jan 29 11:01:55.959351 containerd[1433]: time="2025-01-29T11:01:55.959317818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.959426 containerd[1433]: time="2025-01-29T11:01:55.959380017Z" level=info msg="RemovePodSandbox \"14bc9b00d4fc9571f80d61b96c6e77badc32abcc8b151b30b50de73651536be9\" returns successfully"
Jan 29 11:01:55.959755 containerd[1433]: time="2025-01-29T11:01:55.959705013Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\""
Jan 29 11:01:55.959846 containerd[1433]: time="2025-01-29T11:01:55.959796972Z" level=info msg="TearDown network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" successfully"
Jan 29 11:01:55.959846 containerd[1433]: time="2025-01-29T11:01:55.959810731Z" level=info msg="StopPodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" returns successfully"
Jan 29 11:01:55.961740 containerd[1433]: time="2025-01-29T11:01:55.960247086Z" level=info msg="RemovePodSandbox for \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\""
Jan 29 11:01:55.961740 containerd[1433]: time="2025-01-29T11:01:55.960278925Z" level=info msg="Forcibly stopping sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\""
Jan 29 11:01:55.961740 containerd[1433]: time="2025-01-29T11:01:55.960344564Z" level=info msg="TearDown network for sandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" successfully"
Jan 29 11:01:55.965007 containerd[1433]: time="2025-01-29T11:01:55.964967582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.965187 containerd[1433]: time="2025-01-29T11:01:55.965168299Z" level=info msg="RemovePodSandbox \"6e0b732e7f74360c2227f37e836de5f44ade7563a96f8a0c0cc023d00934d527\" returns successfully"
Jan 29 11:01:55.966478 containerd[1433]: time="2025-01-29T11:01:55.966449442Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\""
Jan 29 11:01:55.966614 containerd[1433]: time="2025-01-29T11:01:55.966559200Z" level=info msg="TearDown network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" successfully"
Jan 29 11:01:55.966614 containerd[1433]: time="2025-01-29T11:01:55.966569280Z" level=info msg="StopPodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" returns successfully"
Jan 29 11:01:55.966975 containerd[1433]: time="2025-01-29T11:01:55.966948395Z" level=info msg="RemovePodSandbox for \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\""
Jan 29 11:01:55.966975 containerd[1433]: time="2025-01-29T11:01:55.966976514Z" level=info msg="Forcibly stopping sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\""
Jan 29 11:01:55.967060 containerd[1433]: time="2025-01-29T11:01:55.967039474Z" level=info msg="TearDown network for sandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" successfully"
Jan 29 11:01:55.969464 containerd[1433]: time="2025-01-29T11:01:55.969413241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.969568 containerd[1433]: time="2025-01-29T11:01:55.969473041Z" level=info msg="RemovePodSandbox \"18f88cf55494da09bfff5ddc0b36f60ec8bf66b51d5a343fda08c087cae222c0\" returns successfully"
Jan 29 11:01:55.969921 containerd[1433]: time="2025-01-29T11:01:55.969893275Z" level=info msg="StopPodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\""
Jan 29 11:01:55.970002 containerd[1433]: time="2025-01-29T11:01:55.969985914Z" level=info msg="TearDown network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" successfully"
Jan 29 11:01:55.970002 containerd[1433]: time="2025-01-29T11:01:55.969999913Z" level=info msg="StopPodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" returns successfully"
Jan 29 11:01:55.970231 containerd[1433]: time="2025-01-29T11:01:55.970207671Z" level=info msg="RemovePodSandbox for \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\""
Jan 29 11:01:55.971466 containerd[1433]: time="2025-01-29T11:01:55.970353669Z" level=info msg="Forcibly stopping sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\""
Jan 29 11:01:55.971466 containerd[1433]: time="2025-01-29T11:01:55.970421268Z" level=info msg="TearDown network for sandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" successfully"
Jan 29 11:01:55.972766 containerd[1433]: time="2025-01-29T11:01:55.972733636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.972922 containerd[1433]: time="2025-01-29T11:01:55.972903034Z" level=info msg="RemovePodSandbox \"f5c8d71cbca4af6f9421ecd5501058194a63b9ec5266ba67aa914266a955d2ac\" returns successfully"
Jan 29 11:01:55.973501 containerd[1433]: time="2025-01-29T11:01:55.973469306Z" level=info msg="StopPodSandbox for \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\""
Jan 29 11:01:55.973581 containerd[1433]: time="2025-01-29T11:01:55.973564145Z" level=info msg="TearDown network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\" successfully"
Jan 29 11:01:55.973581 containerd[1433]: time="2025-01-29T11:01:55.973578905Z" level=info msg="StopPodSandbox for \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\" returns successfully"
Jan 29 11:01:55.973890 containerd[1433]: time="2025-01-29T11:01:55.973868461Z" level=info msg="RemovePodSandbox for \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\""
Jan 29 11:01:55.973930 containerd[1433]: time="2025-01-29T11:01:55.973893661Z" level=info msg="Forcibly stopping sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\""
Jan 29 11:01:55.973967 containerd[1433]: time="2025-01-29T11:01:55.973959100Z" level=info msg="TearDown network for sandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\" successfully"
Jan 29 11:01:55.980429 containerd[1433]: time="2025-01-29T11:01:55.980360573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.980502 containerd[1433]: time="2025-01-29T11:01:55.980435612Z" level=info msg="RemovePodSandbox \"33b022103de56595077a5de2960ef18aee4881f082859756d8706f7c497fde0a\" returns successfully"
Jan 29 11:01:55.981011 containerd[1433]: time="2025-01-29T11:01:55.980827927Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\""
Jan 29 11:01:55.981011 containerd[1433]: time="2025-01-29T11:01:55.980941965Z" level=info msg="TearDown network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully"
Jan 29 11:01:55.981011 containerd[1433]: time="2025-01-29T11:01:55.980953365Z" level=info msg="StopPodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns successfully"
Jan 29 11:01:55.981283 containerd[1433]: time="2025-01-29T11:01:55.981193922Z" level=info msg="RemovePodSandbox for \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\""
Jan 29 11:01:55.981283 containerd[1433]: time="2025-01-29T11:01:55.981221361Z" level=info msg="Forcibly stopping sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\""
Jan 29 11:01:55.981365 containerd[1433]: time="2025-01-29T11:01:55.981285001Z" level=info msg="TearDown network for sandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" successfully"
Jan 29 11:01:55.983988 containerd[1433]: time="2025-01-29T11:01:55.983934885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.984067 containerd[1433]: time="2025-01-29T11:01:55.983994684Z" level=info msg="RemovePodSandbox \"6b8232c70827ab565f775b9d30ae993425b59b37bf8f3078ebfa4cddcc5b41cc\" returns successfully"
Jan 29 11:01:55.984371 containerd[1433]: time="2025-01-29T11:01:55.984347919Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\""
Jan 29 11:01:55.984453 containerd[1433]: time="2025-01-29T11:01:55.984438678Z" level=info msg="TearDown network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" successfully"
Jan 29 11:01:55.984485 containerd[1433]: time="2025-01-29T11:01:55.984451558Z" level=info msg="StopPodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" returns successfully"
Jan 29 11:01:55.984763 containerd[1433]: time="2025-01-29T11:01:55.984713754Z" level=info msg="RemovePodSandbox for \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\""
Jan 29 11:01:55.984763 containerd[1433]: time="2025-01-29T11:01:55.984738874Z" level=info msg="Forcibly stopping sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\""
Jan 29 11:01:55.984831 containerd[1433]: time="2025-01-29T11:01:55.984799153Z" level=info msg="TearDown network for sandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" successfully"
Jan 29 11:01:55.987292 containerd[1433]: time="2025-01-29T11:01:55.987254880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.987351 containerd[1433]: time="2025-01-29T11:01:55.987314679Z" level=info msg="RemovePodSandbox \"01f12646a998e5fe39d261e45647b1c1abcbd65259a6dd3951f0e0edffe05e5d\" returns successfully"
Jan 29 11:01:55.987660 containerd[1433]: time="2025-01-29T11:01:55.987640434Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\""
Jan 29 11:01:55.987740 containerd[1433]: time="2025-01-29T11:01:55.987724913Z" level=info msg="TearDown network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" successfully"
Jan 29 11:01:55.987740 containerd[1433]: time="2025-01-29T11:01:55.987738913Z" level=info msg="StopPodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" returns successfully"
Jan 29 11:01:55.988093 containerd[1433]: time="2025-01-29T11:01:55.988049509Z" level=info msg="RemovePodSandbox for \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\""
Jan 29 11:01:55.991952 containerd[1433]: time="2025-01-29T11:01:55.988214707Z" level=info msg="Forcibly stopping sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\""
Jan 29 11:01:55.991952 containerd[1433]: time="2025-01-29T11:01:55.988314425Z" level=info msg="TearDown network for sandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" successfully"
Jan 29 11:01:55.993574 containerd[1433]: time="2025-01-29T11:01:55.993535515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:01:55.993724 containerd[1433]: time="2025-01-29T11:01:55.993703232Z" level=info msg="RemovePodSandbox \"6461ef9545a9563b8ebfcf56b5ee8c6ef78d349b7ccd0d3ec300a0b0969b1828\" returns successfully" Jan 29 11:01:55.994218 containerd[1433]: time="2025-01-29T11:01:55.994193946Z" level=info msg="StopPodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\"" Jan 29 11:01:55.994320 containerd[1433]: time="2025-01-29T11:01:55.994303584Z" level=info msg="TearDown network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" successfully" Jan 29 11:01:55.994320 containerd[1433]: time="2025-01-29T11:01:55.994317344Z" level=info msg="StopPodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" returns successfully" Jan 29 11:01:55.994586 containerd[1433]: time="2025-01-29T11:01:55.994563061Z" level=info msg="RemovePodSandbox for \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\"" Jan 29 11:01:55.994649 containerd[1433]: time="2025-01-29T11:01:55.994586420Z" level=info msg="Forcibly stopping sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\"" Jan 29 11:01:55.994649 containerd[1433]: time="2025-01-29T11:01:55.994645820Z" level=info msg="TearDown network for sandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" successfully" Jan 29 11:01:55.996959 containerd[1433]: time="2025-01-29T11:01:55.996911149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:55.997016 containerd[1433]: time="2025-01-29T11:01:55.996977668Z" level=info msg="RemovePodSandbox \"d1b3cee992acbd881f36df15e75d503bde10548e5e8438561d6ed806c2ae0627\" returns successfully" Jan 29 11:01:55.997481 containerd[1433]: time="2025-01-29T11:01:55.997328463Z" level=info msg="StopPodSandbox for \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\"" Jan 29 11:01:55.997481 containerd[1433]: time="2025-01-29T11:01:55.997417102Z" level=info msg="TearDown network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\" successfully" Jan 29 11:01:55.997481 containerd[1433]: time="2025-01-29T11:01:55.997426182Z" level=info msg="StopPodSandbox for \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\" returns successfully" Jan 29 11:01:55.997637 containerd[1433]: time="2025-01-29T11:01:55.997616139Z" level=info msg="RemovePodSandbox for \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\"" Jan 29 11:01:55.997674 containerd[1433]: time="2025-01-29T11:01:55.997639619Z" level=info msg="Forcibly stopping sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\"" Jan 29 11:01:55.997714 containerd[1433]: time="2025-01-29T11:01:55.997700538Z" level=info msg="TearDown network for sandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\" successfully" Jan 29 11:01:56.000047 containerd[1433]: time="2025-01-29T11:01:56.000010867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.000128 containerd[1433]: time="2025-01-29T11:01:56.000072026Z" level=info msg="RemovePodSandbox \"0a0990a2d87b07533cc0b6ac7b607a0798e46d676706408f231ef00fc73474e8\" returns successfully" Jan 29 11:01:56.000417 containerd[1433]: time="2025-01-29T11:01:56.000391062Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:56.000496 containerd[1433]: time="2025-01-29T11:01:56.000482260Z" level=info msg="TearDown network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 29 11:01:56.000558 containerd[1433]: time="2025-01-29T11:01:56.000495420Z" level=info msg="StopPodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:56.000717 containerd[1433]: time="2025-01-29T11:01:56.000691378Z" level=info msg="RemovePodSandbox for \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:56.000752 containerd[1433]: time="2025-01-29T11:01:56.000716057Z" level=info msg="Forcibly stopping sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\"" Jan 29 11:01:56.000805 containerd[1433]: time="2025-01-29T11:01:56.000791136Z" level=info msg="TearDown network for sandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" successfully" Jan 29 11:01:56.003821 containerd[1433]: time="2025-01-29T11:01:56.003496660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.003821 containerd[1433]: time="2025-01-29T11:01:56.003552940Z" level=info msg="RemovePodSandbox \"aafe0db3f38a23a389b6a0492d089293cbec7abd24493b8ca791b45bda476fe8\" returns successfully" Jan 29 11:01:56.003925 containerd[1433]: time="2025-01-29T11:01:56.003851936Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:56.003957 containerd[1433]: time="2025-01-29T11:01:56.003948054Z" level=info msg="TearDown network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" successfully" Jan 29 11:01:56.003978 containerd[1433]: time="2025-01-29T11:01:56.003958374Z" level=info msg="StopPodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" returns successfully" Jan 29 11:01:56.004313 containerd[1433]: time="2025-01-29T11:01:56.004290130Z" level=info msg="RemovePodSandbox for \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:56.004313 containerd[1433]: time="2025-01-29T11:01:56.004314610Z" level=info msg="Forcibly stopping sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\"" Jan 29 11:01:56.004463 containerd[1433]: time="2025-01-29T11:01:56.004378049Z" level=info msg="TearDown network for sandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" successfully" Jan 29 11:01:56.006946 containerd[1433]: time="2025-01-29T11:01:56.006897775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.007060 containerd[1433]: time="2025-01-29T11:01:56.006972934Z" level=info msg="RemovePodSandbox \"b46a50fec323b044659a346af16a28506ecbdfecf9950a3204e6c994114fabbe\" returns successfully" Jan 29 11:01:56.007412 containerd[1433]: time="2025-01-29T11:01:56.007387809Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" Jan 29 11:01:56.007498 containerd[1433]: time="2025-01-29T11:01:56.007482768Z" level=info msg="TearDown network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" successfully" Jan 29 11:01:56.007533 containerd[1433]: time="2025-01-29T11:01:56.007498047Z" level=info msg="StopPodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" returns successfully" Jan 29 11:01:56.007735 containerd[1433]: time="2025-01-29T11:01:56.007704645Z" level=info msg="RemovePodSandbox for \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" Jan 29 11:01:56.007735 containerd[1433]: time="2025-01-29T11:01:56.007729044Z" level=info msg="Forcibly stopping sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\"" Jan 29 11:01:56.007811 containerd[1433]: time="2025-01-29T11:01:56.007793164Z" level=info msg="TearDown network for sandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" successfully" Jan 29 11:01:56.010981 containerd[1433]: time="2025-01-29T11:01:56.010933082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.011031 containerd[1433]: time="2025-01-29T11:01:56.011000961Z" level=info msg="RemovePodSandbox \"99d85da93f9c7dc136171646006f37592c16a670234992814fa07ad684e12e58\" returns successfully" Jan 29 11:01:56.011357 containerd[1433]: time="2025-01-29T11:01:56.011329237Z" level=info msg="StopPodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\"" Jan 29 11:01:56.011436 containerd[1433]: time="2025-01-29T11:01:56.011420276Z" level=info msg="TearDown network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" successfully" Jan 29 11:01:56.011436 containerd[1433]: time="2025-01-29T11:01:56.011433475Z" level=info msg="StopPodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" returns successfully" Jan 29 11:01:56.012886 containerd[1433]: time="2025-01-29T11:01:56.011686912Z" level=info msg="RemovePodSandbox for \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\"" Jan 29 11:01:56.012886 containerd[1433]: time="2025-01-29T11:01:56.011715352Z" level=info msg="Forcibly stopping sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\"" Jan 29 11:01:56.012886 containerd[1433]: time="2025-01-29T11:01:56.011779631Z" level=info msg="TearDown network for sandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" successfully" Jan 29 11:01:56.014430 containerd[1433]: time="2025-01-29T11:01:56.014397116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.014545 containerd[1433]: time="2025-01-29T11:01:56.014528715Z" level=info msg="RemovePodSandbox \"0fce31cb56de084348b9d69ca0393a02b7c453c69f36ed3a9068fd1c5442c16f\" returns successfully" Jan 29 11:01:56.014949 containerd[1433]: time="2025-01-29T11:01:56.014924829Z" level=info msg="StopPodSandbox for \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\"" Jan 29 11:01:56.015048 containerd[1433]: time="2025-01-29T11:01:56.015018748Z" level=info msg="TearDown network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\" successfully" Jan 29 11:01:56.015048 containerd[1433]: time="2025-01-29T11:01:56.015032988Z" level=info msg="StopPodSandbox for \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\" returns successfully" Jan 29 11:01:56.016099 containerd[1433]: time="2025-01-29T11:01:56.015271905Z" level=info msg="RemovePodSandbox for \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\"" Jan 29 11:01:56.016099 containerd[1433]: time="2025-01-29T11:01:56.015300904Z" level=info msg="Forcibly stopping sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\"" Jan 29 11:01:56.016099 containerd[1433]: time="2025-01-29T11:01:56.015366423Z" level=info msg="TearDown network for sandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\" successfully" Jan 29 11:01:56.017946 containerd[1433]: time="2025-01-29T11:01:56.017907710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.018002 containerd[1433]: time="2025-01-29T11:01:56.017972709Z" level=info msg="RemovePodSandbox \"6a7259f3f1e5b5771d2409342c010121b1c475045c44152668c661e9e31f85bb\" returns successfully" Jan 29 11:01:56.018348 containerd[1433]: time="2025-01-29T11:01:56.018323704Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:56.018435 containerd[1433]: time="2025-01-29T11:01:56.018422463Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:56.018461 containerd[1433]: time="2025-01-29T11:01:56.018435023Z" level=info msg="StopPodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:56.018836 containerd[1433]: time="2025-01-29T11:01:56.018809858Z" level=info msg="RemovePodSandbox for \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:56.018871 containerd[1433]: time="2025-01-29T11:01:56.018846897Z" level=info msg="Forcibly stopping sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\"" Jan 29 11:01:56.018934 containerd[1433]: time="2025-01-29T11:01:56.018921256Z" level=info msg="TearDown network for sandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" successfully" Jan 29 11:01:56.022340 containerd[1433]: time="2025-01-29T11:01:56.022299132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.022408 containerd[1433]: time="2025-01-29T11:01:56.022363811Z" level=info msg="RemovePodSandbox \"5398096d9b41d23813f71600ca53440e6406642fe72eda686abb4c64bd8ae0e9\" returns successfully" Jan 29 11:01:56.022772 containerd[1433]: time="2025-01-29T11:01:56.022744606Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:56.022876 containerd[1433]: time="2025-01-29T11:01:56.022849364Z" level=info msg="TearDown network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" successfully" Jan 29 11:01:56.022907 containerd[1433]: time="2025-01-29T11:01:56.022874124Z" level=info msg="StopPodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" returns successfully" Jan 29 11:01:56.024107 containerd[1433]: time="2025-01-29T11:01:56.023289159Z" level=info msg="RemovePodSandbox for \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:56.024107 containerd[1433]: time="2025-01-29T11:01:56.023321078Z" level=info msg="Forcibly stopping sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\"" Jan 29 11:01:56.024107 containerd[1433]: time="2025-01-29T11:01:56.023384917Z" level=info msg="TearDown network for sandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" successfully" Jan 29 11:01:56.025881 containerd[1433]: time="2025-01-29T11:01:56.025837245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.025963 containerd[1433]: time="2025-01-29T11:01:56.025909084Z" level=info msg="RemovePodSandbox \"2a7cade2f1b1e615681103906c7a55eb10c380a663839247d8ef67d347bc9ac3\" returns successfully" Jan 29 11:01:56.026333 containerd[1433]: time="2025-01-29T11:01:56.026303359Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" Jan 29 11:01:56.026424 containerd[1433]: time="2025-01-29T11:01:56.026401557Z" level=info msg="TearDown network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" successfully" Jan 29 11:01:56.026424 containerd[1433]: time="2025-01-29T11:01:56.026420957Z" level=info msg="StopPodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" returns successfully" Jan 29 11:01:56.027122 containerd[1433]: time="2025-01-29T11:01:56.026830912Z" level=info msg="RemovePodSandbox for \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" Jan 29 11:01:56.027122 containerd[1433]: time="2025-01-29T11:01:56.026869351Z" level=info msg="Forcibly stopping sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\"" Jan 29 11:01:56.027122 containerd[1433]: time="2025-01-29T11:01:56.026935190Z" level=info msg="TearDown network for sandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" successfully" Jan 29 11:01:56.029384 containerd[1433]: time="2025-01-29T11:01:56.029347359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.029444 containerd[1433]: time="2025-01-29T11:01:56.029406678Z" level=info msg="RemovePodSandbox \"c76d32c196fd691ed3159cbb91c4bd752c18110463e9acc4474309162c90dde9\" returns successfully" Jan 29 11:01:56.029979 containerd[1433]: time="2025-01-29T11:01:56.029809592Z" level=info msg="StopPodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\"" Jan 29 11:01:56.029979 containerd[1433]: time="2025-01-29T11:01:56.029910031Z" level=info msg="TearDown network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" successfully" Jan 29 11:01:56.029979 containerd[1433]: time="2025-01-29T11:01:56.029919471Z" level=info msg="StopPodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" returns successfully" Jan 29 11:01:56.030345 containerd[1433]: time="2025-01-29T11:01:56.030319746Z" level=info msg="RemovePodSandbox for \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\"" Jan 29 11:01:56.030376 containerd[1433]: time="2025-01-29T11:01:56.030350345Z" level=info msg="Forcibly stopping sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\"" Jan 29 11:01:56.030431 containerd[1433]: time="2025-01-29T11:01:56.030418104Z" level=info msg="TearDown network for sandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" successfully" Jan 29 11:01:56.032848 containerd[1433]: time="2025-01-29T11:01:56.032812313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.032919 containerd[1433]: time="2025-01-29T11:01:56.032879712Z" level=info msg="RemovePodSandbox \"5b6c54e92cfefe818f7a8ebf714c7e9f914f6ca76bf4a365d440fde7b3e29055\" returns successfully" Jan 29 11:01:56.033318 containerd[1433]: time="2025-01-29T11:01:56.033292986Z" level=info msg="StopPodSandbox for \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\"" Jan 29 11:01:56.033400 containerd[1433]: time="2025-01-29T11:01:56.033385865Z" level=info msg="TearDown network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\" successfully" Jan 29 11:01:56.033426 containerd[1433]: time="2025-01-29T11:01:56.033399425Z" level=info msg="StopPodSandbox for \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\" returns successfully" Jan 29 11:01:56.034110 containerd[1433]: time="2025-01-29T11:01:56.033706541Z" level=info msg="RemovePodSandbox for \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\"" Jan 29 11:01:56.034110 containerd[1433]: time="2025-01-29T11:01:56.033732781Z" level=info msg="Forcibly stopping sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\"" Jan 29 11:01:56.034110 containerd[1433]: time="2025-01-29T11:01:56.033799140Z" level=info msg="TearDown network for sandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\" successfully" Jan 29 11:01:56.036165 containerd[1433]: time="2025-01-29T11:01:56.036120309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:01:56.036225 containerd[1433]: time="2025-01-29T11:01:56.036184188Z" level=info msg="RemovePodSandbox \"b1a21505a1103da9c44ad9a37a8f30cf92829a9993bbd6eee0db3e5fa6b461b9\" returns successfully" Jan 29 11:01:58.916282 systemd[1]: Started sshd@19-10.0.0.86:22-10.0.0.1:34326.service - OpenSSH per-connection server daemon (10.0.0.1:34326). Jan 29 11:01:58.969402 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 34326 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:01:58.971146 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:01:58.978113 systemd-logind[1416]: New session 20 of user core. Jan 29 11:01:58.985242 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:01:59.128690 sshd[5792]: Connection closed by 10.0.0.1 port 34326 Jan 29 11:01:59.129331 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Jan 29 11:01:59.132060 systemd-logind[1416]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:01:59.132309 systemd[1]: sshd@19-10.0.0.86:22-10.0.0.1:34326.service: Deactivated successfully. Jan 29 11:01:59.133896 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:01:59.135327 systemd-logind[1416]: Removed session 20. Jan 29 11:02:04.159428 systemd[1]: Started sshd@20-10.0.0.86:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742). Jan 29 11:02:04.200770 sshd[5814]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:02:04.201735 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:02:04.211376 systemd-logind[1416]: New session 21 of user core. Jan 29 11:02:04.223303 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 11:02:04.394878 sshd[5816]: Connection closed by 10.0.0.1 port 54742 Jan 29 11:02:04.395646 sshd-session[5814]: pam_unix(sshd:session): session closed for user core Jan 29 11:02:04.398889 systemd[1]: sshd@20-10.0.0.86:22-10.0.0.1:54742.service: Deactivated successfully. Jan 29 11:02:04.400690 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:02:04.402678 systemd-logind[1416]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:02:04.403735 systemd-logind[1416]: Removed session 21. Jan 29 11:02:05.895914 kubelet[2525]: E0129 11:02:05.895799 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"