Apr 30 00:10:57.902139 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 00:10:57.902160 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Tue Apr 29 22:24:03 -00 2025 Apr 30 00:10:57.902170 kernel: KASLR enabled Apr 30 00:10:57.902175 kernel: efi: EFI v2.7 by EDK II Apr 30 00:10:57.902181 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 Apr 30 00:10:57.902186 kernel: random: crng init done Apr 30 00:10:57.902193 kernel: secureboot: Secure boot disabled Apr 30 00:10:57.902199 kernel: ACPI: Early table checksum verification disabled Apr 30 00:10:57.902205 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Apr 30 00:10:57.902213 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Apr 30 00:10:57.902218 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902224 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902230 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902236 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902243 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902251 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902257 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902263 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902269 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:10:57.902275 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Apr 30 00:10:57.902282 kernel: NUMA: Failed to initialise from firmware Apr 30 00:10:57.902288 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Apr 30 00:10:57.902294 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff] Apr 30 00:10:57.902300 kernel: Zone ranges: Apr 30 00:10:57.902306 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Apr 30 00:10:57.902313 kernel: DMA32 empty Apr 30 00:10:57.902319 kernel: Normal empty Apr 30 00:10:57.902325 kernel: Movable zone start for each node Apr 30 00:10:57.902332 kernel: Early memory node ranges Apr 30 00:10:57.902338 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Apr 30 00:10:57.902344 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Apr 30 00:10:57.902350 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Apr 30 00:10:57.902356 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Apr 30 00:10:57.902362 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Apr 30 00:10:57.902368 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Apr 30 00:10:57.902374 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Apr 30 00:10:57.902381 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Apr 30 00:10:57.902388 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Apr 30 00:10:57.902394 kernel: psci: probing for conduit method from ACPI. Apr 30 00:10:57.902400 kernel: psci: PSCIv1.1 detected in firmware. 
Apr 30 00:10:57.902409 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 00:10:57.902415 kernel: psci: Trusted OS migration not required Apr 30 00:10:57.902422 kernel: psci: SMC Calling Convention v1.1 Apr 30 00:10:57.902429 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Apr 30 00:10:57.902436 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 00:10:57.902442 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 00:10:57.902449 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Apr 30 00:10:57.902456 kernel: Detected PIPT I-cache on CPU0 Apr 30 00:10:57.902462 kernel: CPU features: detected: GIC system register CPU interface Apr 30 00:10:57.902469 kernel: CPU features: detected: Hardware dirty bit management Apr 30 00:10:57.902475 kernel: CPU features: detected: Spectre-v4 Apr 30 00:10:57.902481 kernel: CPU features: detected: Spectre-BHB Apr 30 00:10:57.902488 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 00:10:57.902496 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 00:10:57.902502 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 00:10:57.902509 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 30 00:10:57.902516 kernel: alternatives: applying boot alternatives Apr 30 00:10:57.902524 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd Apr 30 00:10:57.902531 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:10:57.902537 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 00:10:57.902544 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:10:57.902551 kernel: Fallback order for Node 0: 0 Apr 30 00:10:57.902557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Apr 30 00:10:57.902576 kernel: Policy zone: DMA Apr 30 00:10:57.902584 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:10:57.902590 kernel: software IO TLB: area num 4. Apr 30 00:10:57.902597 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Apr 30 00:10:57.902604 kernel: Memory: 2386204K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186084K reserved, 0K cma-reserved) Apr 30 00:10:57.902611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 30 00:10:57.902617 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:10:57.902625 kernel: rcu: RCU event tracing is enabled. Apr 30 00:10:57.902631 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 30 00:10:57.902638 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 00:10:57.902645 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:10:57.902652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 30 00:10:57.902658 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 30 00:10:57.902666 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 00:10:57.902673 kernel: GICv3: 256 SPIs implemented Apr 30 00:10:57.902691 kernel: GICv3: 0 Extended SPIs implemented Apr 30 00:10:57.902698 kernel: Root IRQ handler: gic_handle_irq Apr 30 00:10:57.902705 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 00:10:57.902712 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Apr 30 00:10:57.902718 kernel: ITS [mem 0x08080000-0x0809ffff] Apr 30 00:10:57.902725 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Apr 30 00:10:57.902732 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Apr 30 00:10:57.902745 kernel: GICv3: using LPI property table @0x00000000400f0000 Apr 30 00:10:57.902752 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Apr 30 00:10:57.902761 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:10:57.902768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:10:57.902775 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 00:10:57.902782 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 00:10:57.902789 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 00:10:57.902796 kernel: arm-pv: using stolen time PV Apr 30 00:10:57.902803 kernel: Console: colour dummy device 80x25 Apr 30 00:10:57.902809 kernel: ACPI: Core revision 20230628 Apr 30 00:10:57.902817 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 00:10:57.902823 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:10:57.902831 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:10:57.902838 kernel: landlock: Up and running. Apr 30 00:10:57.902844 kernel: SELinux: Initializing. Apr 30 00:10:57.902851 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:10:57.902858 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:10:57.902865 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 00:10:57.902871 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 00:10:57.902878 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:10:57.902885 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:10:57.902893 kernel: Platform MSI: ITS@0x8080000 domain created Apr 30 00:10:57.902899 kernel: PCI/MSI: ITS@0x8080000 domain created Apr 30 00:10:57.902906 kernel: Remapping and enabling EFI services. Apr 30 00:10:57.902913 kernel: smp: Bringing up secondary CPUs ... 
Apr 30 00:10:57.902920 kernel: Detected PIPT I-cache on CPU1 Apr 30 00:10:57.902926 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Apr 30 00:10:57.902933 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Apr 30 00:10:57.902940 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:10:57.902947 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 00:10:57.902953 kernel: Detected PIPT I-cache on CPU2 Apr 30 00:10:57.902961 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Apr 30 00:10:57.902968 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Apr 30 00:10:57.902980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:10:57.902988 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Apr 30 00:10:57.902995 kernel: Detected PIPT I-cache on CPU3 Apr 30 00:10:57.903002 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Apr 30 00:10:57.903009 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Apr 30 00:10:57.903016 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:10:57.903024 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Apr 30 00:10:57.903032 kernel: smp: Brought up 1 node, 4 CPUs Apr 30 00:10:57.903039 kernel: SMP: Total of 4 processors activated. Apr 30 00:10:57.903046 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 00:10:57.903053 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 00:10:57.903061 kernel: CPU features: detected: Common not Private translations Apr 30 00:10:57.903068 kernel: CPU features: detected: CRC32 instructions Apr 30 00:10:57.903075 kernel: CPU features: detected: Enhanced Virtualization Traps Apr 30 00:10:57.903082 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 00:10:57.903090 kernel: CPU features: detected: LSE atomic instructions Apr 30 00:10:57.903097 kernel: CPU features: detected: Privileged Access Never Apr 30 00:10:57.903104 kernel: CPU features: detected: RAS Extension Support Apr 30 00:10:57.903111 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Apr 30 00:10:57.903118 kernel: CPU: All CPU(s) started at EL1 Apr 30 00:10:57.903125 kernel: alternatives: applying system-wide alternatives Apr 30 00:10:57.903132 kernel: devtmpfs: initialized Apr 30 00:10:57.903140 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:10:57.903147 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 30 00:10:57.903156 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:10:57.903163 kernel: SMBIOS 3.0.0 present. 
Apr 30 00:10:57.903170 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Apr 30 00:10:57.903177 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:10:57.903184 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 00:10:57.903191 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 00:10:57.903198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 00:10:57.903206 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:10:57.903213 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Apr 30 00:10:57.903221 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:10:57.903228 kernel: cpuidle: using governor menu Apr 30 00:10:57.903236 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 30 00:10:57.903243 kernel: ASID allocator initialised with 32768 entries Apr 30 00:10:57.903250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:10:57.903257 kernel: Serial: AMBA PL011 UART driver Apr 30 00:10:57.903264 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 00:10:57.903272 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 00:10:57.903279 kernel: Modules: 508928 pages in range for PLT usage Apr 30 00:10:57.903287 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:10:57.903294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:10:57.903301 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 00:10:57.903308 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 00:10:57.903315 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:10:57.903322 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:10:57.903329 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 00:10:57.903336 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 00:10:57.903343 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:10:57.903352 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:10:57.903359 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:10:57.903366 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:10:57.903373 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:10:57.903380 kernel: ACPI: Interpreter enabled Apr 30 00:10:57.903387 kernel: ACPI: Using GIC for interrupt routing Apr 30 00:10:57.903394 kernel: ACPI: MCFG table detected, 1 entries Apr 30 00:10:57.903401 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Apr 30 00:10:57.903408 kernel: printk: console [ttyAMA0] enabled Apr 30 00:10:57.903416 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 00:10:57.903551 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 00:10:57.903625 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 30 00:10:57.903702 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 30 00:10:57.903777 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Apr 30 00:10:57.903841 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Apr 30 00:10:57.903851 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Apr 30 00:10:57.903862 
kernel: PCI host bridge to bus 0000:00 Apr 30 00:10:57.903935 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Apr 30 00:10:57.903995 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 30 00:10:57.904053 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Apr 30 00:10:57.904109 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 00:10:57.904191 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Apr 30 00:10:57.904266 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Apr 30 00:10:57.904336 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Apr 30 00:10:57.904402 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Apr 30 00:10:57.904467 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 00:10:57.904532 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 00:10:57.904596 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Apr 30 00:10:57.904660 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Apr 30 00:10:57.904791 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 30 00:10:57.904855 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 30 00:10:57.904912 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 30 00:10:57.904921 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 30 00:10:57.904929 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 30 00:10:57.904936 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 30 00:10:57.904943 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 30 00:10:57.904950 kernel: iommu: Default domain type: Translated Apr 30 00:10:57.904957 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 00:10:57.904966 kernel: efivars: Registered efivars operations Apr 30 00:10:57.904973 kernel: vgaarb: loaded Apr 30 00:10:57.904980 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 00:10:57.904987 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:10:57.904994 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:10:57.905001 kernel: pnp: PnP ACPI init Apr 30 00:10:57.905076 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 30 00:10:57.905086 kernel: pnp: PnP ACPI: found 1 devices Apr 30 00:10:57.905095 kernel: NET: Registered PF_INET protocol family Apr 30 00:10:57.905102 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 00:10:57.905109 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 00:10:57.905117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 00:10:57.905124 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:10:57.905131 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 00:10:57.905138 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 00:10:57.905145 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:10:57.905153 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:10:57.905161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:10:57.905168 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:10:57.905175 kernel: kvm [1]: HYP mode not available 
Apr 30 00:10:57.905183 kernel: Initialise system trusted keyrings Apr 30 00:10:57.905190 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 00:10:57.905197 kernel: Key type asymmetric registered Apr 30 00:10:57.905204 kernel: Asymmetric key parser 'x509' registered Apr 30 00:10:57.905210 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 00:10:57.905217 kernel: io scheduler mq-deadline registered Apr 30 00:10:57.905226 kernel: io scheduler kyber registered Apr 30 00:10:57.905233 kernel: io scheduler bfq registered Apr 30 00:10:57.905240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 00:10:57.905247 kernel: ACPI: button: Power Button [PWRB] Apr 30 00:10:57.905254 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 00:10:57.905320 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Apr 30 00:10:57.905330 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:10:57.905337 kernel: thunder_xcv, ver 1.0 Apr 30 00:10:57.905344 kernel: thunder_bgx, ver 1.0 Apr 30 00:10:57.905353 kernel: nicpf, ver 1.0 Apr 30 00:10:57.905360 kernel: nicvf, ver 1.0 Apr 30 00:10:57.905434 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:10:57.905510 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:10:57 UTC (1745971857) Apr 30 00:10:57.905520 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:10:57.905527 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 00:10:57.905536 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:10:57.905544 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:10:57.905557 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:10:57.905564 kernel: Segment Routing with IPv6 Apr 30 00:10:57.905572 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:10:57.905579 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:10:57.905586 kernel: Key type dns_resolver registered Apr 30 00:10:57.905593 kernel: registered taskstats version 1 Apr 30 00:10:57.905600 kernel: Loading compiled-in X.509 certificates Apr 30 00:10:57.905609 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: bbef389676bd9584646af24e9e264c7789f8630f' Apr 30 00:10:57.905617 kernel: Key type .fscrypt registered Apr 30 00:10:57.905625 kernel: Key type fscrypt-provisioning registered Apr 30 00:10:57.905633 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:10:57.905640 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:10:57.905648 kernel: ima: No architecture policies found Apr 30 00:10:57.905659 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:10:57.905669 kernel: clk: Disabling unused clocks Apr 30 00:10:57.905686 kernel: Freeing unused kernel memory: 39744K Apr 30 00:10:57.905693 kernel: Run /init as init process Apr 30 00:10:57.905703 kernel: with arguments: Apr 30 00:10:57.905714 kernel: /init Apr 30 00:10:57.905721 kernel: with environment: Apr 30 00:10:57.905728 kernel: HOME=/ Apr 30 00:10:57.905735 kernel: TERM=linux Apr 30 00:10:57.905747 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:10:57.905756 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:10:57.905765 systemd[1]: Detected virtualization kvm. Apr 30 00:10:57.905772 systemd[1]: Detected architecture arm64. Apr 30 00:10:57.905782 systemd[1]: Running in initrd. Apr 30 00:10:57.905789 systemd[1]: No hostname configured, using default hostname. Apr 30 00:10:57.905796 systemd[1]: Hostname set to . Apr 30 00:10:57.905804 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:10:57.905812 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:10:57.905819 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:10:57.905827 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:10:57.905835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 00:10:57.905844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:10:57.905852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:10:57.905860 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:10:57.905869 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:10:57.905877 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:10:57.905885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:10:57.905893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:10:57.905902 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:10:57.905910 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:10:57.905918 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:10:57.905926 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:10:57.905933 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:10:57.905941 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:10:57.905949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:10:57.905957 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:10:57.905966 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 30 00:10:57.905974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:10:57.905982 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:10:57.905990 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:10:57.905998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:10:57.906006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:10:57.906014 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:10:57.906022 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:10:57.906029 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:10:57.906038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:10:57.906046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:10:57.906054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:10:57.906062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:10:57.906070 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:10:57.906096 systemd-journald[239]: Collecting audit messages is disabled. Apr 30 00:10:57.906117 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:10:57.906126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:10:57.906136 systemd-journald[239]: Journal started Apr 30 00:10:57.906155 systemd-journald[239]: Runtime Journal (/run/log/journal/1202fc91d0c542b2a7429a81d7ba3885) is 5.9M, max 47.3M, 41.4M free. Apr 30 00:10:57.895811 systemd-modules-load[240]: Inserted module 'overlay' Apr 30 00:10:57.909426 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:10:57.909448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:10:57.912048 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:10:57.912528 systemd-modules-load[240]: Inserted module 'br_netfilter' Apr 30 00:10:57.913293 kernel: Bridge firewalling registered Apr 30 00:10:57.914708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:10:57.922842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:10:57.924344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:10:57.926083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:10:57.928448 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:10:57.934319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:10:57.935365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:10:57.943593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:10:57.949937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:10:57.950886 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:10:57.953214 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 30 00:10:57.966175 dracut-cmdline[275]: dracut-dracut-053 Apr 30 00:10:57.968583 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd Apr 30 00:10:57.978087 systemd-resolved[273]: Positive Trust Anchors: Apr 30 00:10:57.978157 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:10:57.978189 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:10:57.982927 systemd-resolved[273]: Defaulting to hostname 'linux'. Apr 30 00:10:57.984128 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:10:57.985671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:10:58.043710 kernel: SCSI subsystem initialized Apr 30 00:10:58.048695 kernel: Loading iSCSI transport class v2.0-870. Apr 30 00:10:58.055702 kernel: iscsi: registered transport (tcp) Apr 30 00:10:58.068723 kernel: iscsi: registered transport (qla4xxx) Apr 30 00:10:58.068767 kernel: QLogic iSCSI HBA Driver Apr 30 00:10:58.111196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:10:58.120850 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:10:58.137740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 00:10:58.137802 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:10:58.138698 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:10:58.185706 kernel: raid6: neonx8 gen() 15619 MB/s Apr 30 00:10:58.202692 kernel: raid6: neonx4 gen() 15455 MB/s Apr 30 00:10:58.226699 kernel: raid6: neonx2 gen() 13146 MB/s Apr 30 00:10:58.243695 kernel: raid6: neonx1 gen() 10454 MB/s Apr 30 00:10:58.260691 kernel: raid6: int64x8 gen() 6941 MB/s Apr 30 00:10:58.277690 kernel: raid6: int64x4 gen() 7324 MB/s Apr 30 00:10:58.294694 kernel: raid6: int64x2 gen() 6114 MB/s Apr 30 00:10:58.311689 kernel: raid6: int64x1 gen() 5043 MB/s Apr 30 00:10:58.311705 kernel: raid6: using algorithm neonx8 gen() 15619 MB/s Apr 30 00:10:58.328702 kernel: raid6: .... xor() 11911 MB/s, rmw enabled Apr 30 00:10:58.328716 kernel: raid6: using neon recovery algorithm Apr 30 00:10:58.333696 kernel: xor: measuring software checksum speed Apr 30 00:10:58.334694 kernel: 8regs : 18022 MB/sec Apr 30 00:10:58.334706 kernel: 32regs : 19636 MB/sec Apr 30 00:10:58.335691 kernel: arm64_neon : 24382 MB/sec Apr 30 00:10:58.335702 kernel: xor: using function: arm64_neon (24382 MB/sec) Apr 30 00:10:58.388357 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:10:58.399238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Apr 30 00:10:58.412876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:10:58.423790 systemd-udevd[457]: Using default interface naming scheme 'v255'. Apr 30 00:10:58.426961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:10:58.436841 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:10:58.449379 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Apr 30 00:10:58.476068 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:10:58.487885 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:10:58.528378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:10:58.535877 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:10:58.547286 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:10:58.548991 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:10:58.552576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:10:58.553475 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:10:58.562852 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:10:58.569940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:10:58.579160 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Apr 30 00:10:58.595274 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 00:10:58.595377 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 00:10:58.595389 kernel: GPT:9289727 != 19775487 Apr 30 00:10:58.595398 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 00:10:58.595407 kernel: GPT:9289727 != 19775487 Apr 30 00:10:58.595421 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 00:10:58.595432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:10:58.585711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:10:58.585830 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:10:58.586807 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:10:58.587535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:10:58.587652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:10:58.590238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:10:58.600253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:10:58.611608 kernel: BTRFS: device fsid 9647859b-527c-478f-8aa1-9dfa3fa871e3 devid 1 transid 43 /dev/vda3 scanned by (udev-worker) (515) Apr 30 00:10:58.611629 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (518) Apr 30 00:10:58.619102 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 00:10:58.620996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:10:58.628043 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 30 00:10:58.631544 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 00:10:58.632565 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 00:10:58.638352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 00:10:58.649869 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:10:58.651892 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:10:58.656652 disk-uuid[547]: Primary Header is updated. Apr 30 00:10:58.656652 disk-uuid[547]: Secondary Entries is updated. Apr 30 00:10:58.656652 disk-uuid[547]: Secondary Header is updated. Apr 30 00:10:58.659726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:10:58.674715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:10:59.674563 disk-uuid[549]: The operation has completed successfully. Apr 30 00:10:59.675463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:10:59.705066 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:10:59.705169 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:10:59.724860 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:10:59.728717 sh[572]: Success Apr 30 00:10:59.744620 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:10:59.775552 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:10:59.791077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:10:59.792793 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 00:10:59.807828 kernel: BTRFS info (device dm-0): first mount of filesystem 9647859b-527c-478f-8aa1-9dfa3fa871e3 Apr 30 00:10:59.807881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:10:59.807891 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:10:59.809201 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:10:59.809239 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:10:59.813918 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:10:59.815084 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:10:59.825824 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:10:59.827215 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:10:59.835331 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:10:59.835375 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:10:59.835983 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:10:59.838748 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:10:59.845228 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:10:59.846759 kernel: BTRFS info (device vda6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:10:59.852691 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 30 00:10:59.858856 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:10:59.920393 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:10:59.928836 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:10:59.953339 systemd-networkd[759]: lo: Link UP Apr 30 00:10:59.953351 systemd-networkd[759]: lo: Gained carrier Apr 30 00:10:59.954094 systemd-networkd[759]: Enumeration completed Apr 30 00:10:59.954519 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:10:59.955418 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:10:59.955421 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:10:59.956340 systemd[1]: Reached target network.target - Network. Apr 30 00:10:59.957027 systemd-networkd[759]: eth0: Link UP Apr 30 00:10:59.957031 systemd-networkd[759]: eth0: Gained carrier Apr 30 00:10:59.957039 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:10:59.963368 ignition[664]: Ignition 2.20.0 Apr 30 00:10:59.963374 ignition[664]: Stage: fetch-offline Apr 30 00:10:59.963410 ignition[664]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:10:59.963418 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:10:59.963589 ignition[664]: parsed url from cmdline: "" Apr 30 00:10:59.963593 ignition[664]: no config URL provided Apr 30 00:10:59.963597 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:10:59.963605 ignition[664]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:10:59.963632 ignition[664]: op(1): [started] loading QEMU firmware config module Apr 30 00:10:59.963637 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 00:10:59.969707 ignition[664]: op(1): [finished] loading QEMU firmware config module Apr 30 00:10:59.979759 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:11:00.006753 ignition[664]: parsing config with SHA512: 47fe6c0fae546432ca3d40129b13b46ed0f8af6e714644e798d16376738c60d47788f54825327efba7e7593b3e0b04f25d74362977983834c0b194186c956f02 Apr 30 00:11:00.013741 unknown[664]: fetched base config from "system" Apr 30 00:11:00.013756 unknown[664]: fetched user config from "qemu" Apr 30 00:11:00.014347 ignition[664]: fetch-offline: fetch-offline passed Apr 30 00:11:00.014722 ignition[664]: Ignition finished successfully Apr 30 00:11:00.016492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:11:00.017882 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 00:11:00.026901 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 00:11:00.037444 ignition[770]: Ignition 2.20.0 Apr 30 00:11:00.037454 ignition[770]: Stage: kargs Apr 30 00:11:00.037614 ignition[770]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:11:00.037623 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:11:00.038481 ignition[770]: kargs: kargs passed Apr 30 00:11:00.038525 ignition[770]: Ignition finished successfully Apr 30 00:11:00.042234 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:11:00.053818 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:11:00.062979 ignition[778]: Ignition 2.20.0 Apr 30 00:11:00.062988 ignition[778]: Stage: disks Apr 30 00:11:00.063143 ignition[778]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:11:00.063151 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:11:00.064045 ignition[778]: disks: disks passed Apr 30 00:11:00.065722 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:11:00.064092 ignition[778]: Ignition finished successfully Apr 30 00:11:00.068792 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:11:00.069585 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:11:00.071129 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:11:00.072494 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:11:00.073839 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:11:00.084818 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:11:00.095067 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 00:11:00.099143 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:11:00.107812 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:11:00.148703 kernel: EXT4-fs (vda9): mounted filesystem cd2ccabc-5b27-4350-bc86-21c9a8411827 r/w with ordered data mode. Quota mode: none. Apr 30 00:11:00.148939 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:11:00.150007 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:11:00.165792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:11:00.167462 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:11:00.168284 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 00:11:00.168327 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:11:00.168349 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:11:00.174535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:11:00.176717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 00:11:00.180293 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799) Apr 30 00:11:00.180319 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:11:00.180330 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:11:00.181027 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:11:00.187696 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:11:00.188816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 00:11:00.225317 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:11:00.229701 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:11:00.233874 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:11:00.237712 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:11:00.307531 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:11:00.318849 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:11:00.320311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:11:00.325696 kernel: BTRFS info (device vda6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:11:00.344367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:11:00.349237 ignition[912]: INFO : Ignition 2.20.0 Apr 30 00:11:00.349237 ignition[912]: INFO : Stage: mount Apr 30 00:11:00.351214 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:11:00.351214 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:11:00.351214 ignition[912]: INFO : mount: mount passed Apr 30 00:11:00.351214 ignition[912]: INFO : Ignition finished successfully Apr 30 00:11:00.352041 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:11:00.357795 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:11:00.806892 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:11:00.823886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:11:00.836703 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926) Apr 30 00:11:00.839126 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:11:00.839142 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:11:00.839152 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:11:00.842695 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:11:00.844157 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:11:00.865289 ignition[943]: INFO : Ignition 2.20.0 Apr 30 00:11:00.865289 ignition[943]: INFO : Stage: files Apr 30 00:11:00.866584 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:11:00.866584 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:11:00.866584 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:11:00.869392 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:11:00.869392 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:11:00.869392 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:11:00.872612 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:11:00.872612 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:11:00.872612 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:11:00.872612 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:11:00.869791 unknown[943]: wrote ssh authorized keys file for user: core Apr 30 00:11:00.967180 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 00:11:01.041789 systemd-networkd[759]: eth0: Gained IPv6LL Apr 30 00:11:01.073379 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:11:01.073379 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Apr 30 00:11:01.076602 ignition[943]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Apr 30 00:11:01.076602 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Apr 30 00:11:01.349618 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 00:11:01.750804 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Apr 30 00:11:01.750804 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 30 00:11:01.753970 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 30 00:11:01.784022 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 00:11:01.787796 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 00:11:01.789934 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 30 00:11:01.789934 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:11:01.789934 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:11:01.789934 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:11:01.789934 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:11:01.789934 ignition[943]: INFO : files: files passed Apr 30 00:11:01.789934 ignition[943]: INFO : Ignition finished successfully Apr 30 00:11:01.790754 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:11:01.800842 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:11:01.803910 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 30 00:11:01.805235 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:11:01.806611 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:11:01.811222 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Apr 30 00:11:01.814812 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:11:01.814812 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:11:01.817667 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:11:01.817458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:11:01.820111 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:11:01.834858 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:11:01.854393 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:11:01.855360 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:11:01.856578 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:11:01.858359 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:11:01.859837 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:11:01.870874 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:11:01.884745 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:11:01.895849 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:11:01.903814 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:11:01.904834 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:11:01.906489 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:11:01.907934 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:11:01.908051 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:11:01.909934 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:11:01.911413 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:11:01.912619 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:11:01.913915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:11:01.915419 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:11:01.916981 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:11:01.918374 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:11:01.919813 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:11:01.921311 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:11:01.922718 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:11:01.923982 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:11:01.924103 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:11:01.925982 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 00:11:01.927404 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:11:01.928806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:11:01.929774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:11:01.931043 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:11:01.931150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:11:01.933184 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:11:01.933292 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:11:01.934692 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:11:01.935840 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:11:01.940725 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:11:01.941658 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:11:01.943300 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:11:01.944507 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:11:01.944598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:11:01.945729 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:11:01.945804 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:11:01.946983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:11:01.947082 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:11:01.948497 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:11:01.948589 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:11:01.960830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:11:01.962170 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:11:01.962887 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:11:01.962993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:11:01.964553 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:11:01.964645 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:11:01.969076 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:11:01.970574 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:11:01.975392 ignition[999]: INFO : Ignition 2.20.0 Apr 30 00:11:01.975392 ignition[999]: INFO : Stage: umount Apr 30 00:11:01.977613 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:11:01.977613 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:11:01.977613 ignition[999]: INFO : umount: umount passed Apr 30 00:11:01.977613 ignition[999]: INFO : Ignition finished successfully Apr 30 00:11:01.975412 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:11:01.978450 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:11:01.978539 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:11:01.979538 systemd[1]: Stopped target network.target - Network. Apr 30 00:11:01.980942 systemd[1]: ignition-disks.service: Deactivated successfully. 
Apr 30 00:11:01.980996 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:11:01.982234 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:11:01.982272 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:11:01.983605 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:11:01.983647 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:11:01.984954 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:11:01.984991 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:11:01.988960 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:11:01.990481 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:11:01.997716 systemd-networkd[759]: eth0: DHCPv6 lease lost Apr 30 00:11:01.998370 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:11:01.998508 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:11:02.000626 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:11:02.001743 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:11:02.003443 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:11:02.003492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:11:02.013784 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:11:02.014535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:11:02.014590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:11:02.016456 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:11:02.016502 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:11:02.018026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:11:02.018080 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:11:02.019796 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:11:02.019835 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:11:02.022635 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:11:02.026740 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:11:02.026828 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:11:02.028541 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:11:02.028634 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:11:02.034135 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:11:02.034221 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:11:02.047322 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:11:02.047481 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:11:02.049516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:11:02.049556 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:11:02.050966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:11:02.050994 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 30 00:11:02.052475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:11:02.052519 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:11:02.055641 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:11:02.055693 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:11:02.057980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:11:02.058020 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:11:02.068908 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:11:02.069780 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:11:02.069833 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:11:02.071579 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:11:02.071616 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:11:02.073241 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:11:02.073280 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:11:02.075056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:11:02.075094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:11:02.076944 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:11:02.077027 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:11:02.079028 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:11:02.081025 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:11:02.091508 systemd[1]: Switching root. Apr 30 00:11:02.116501 systemd-journald[239]: Journal stopped Apr 30 00:11:02.852510 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Apr 30 00:11:02.852571 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:11:02.852583 kernel: SELinux: policy capability open_perms=1 Apr 30 00:11:02.852593 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:11:02.852604 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:11:02.852614 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:11:02.852624 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:11:02.852633 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:11:02.852646 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:11:02.852656 kernel: audit: type=1403 audit(1745971862.271:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:11:02.852666 systemd[1]: Successfully loaded SELinux policy in 37.538ms. Apr 30 00:11:02.852710 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.609ms. Apr 30 00:11:02.852731 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:11:02.852745 systemd[1]: Detected virtualization kvm. Apr 30 00:11:02.852755 systemd[1]: Detected architecture arm64. 
Apr 30 00:11:02.852765 systemd[1]: Detected first boot. Apr 30 00:11:02.852775 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:11:02.852788 zram_generator::config[1045]: No configuration found. Apr 30 00:11:02.852801 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:11:02.852811 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:11:02.852822 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 00:11:02.852832 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:11:02.852845 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:11:02.852871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:11:02.852882 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:11:02.852895 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:11:02.852906 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:11:02.852917 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:11:02.852928 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:11:02.852938 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:11:02.852949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:11:02.852960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:11:02.852970 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:11:02.852981 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:11:02.852993 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:11:02.853004 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:11:02.853015 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:11:02.853025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:11:02.853036 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 00:11:02.853046 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:11:02.853058 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:11:02.853070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:11:02.853081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:11:02.853093 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:11:02.853103 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:11:02.853114 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:11:02.853124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:11:02.853135 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:11:02.853146 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:11:02.853157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Apr 30 00:11:02.853167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:11:02.853179 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:11:02.853190 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:11:02.853200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:11:02.853211 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:11:02.853221 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:11:02.853232 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:11:02.853243 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:11:02.853254 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:11:02.853267 systemd[1]: Reached target machines.target - Containers. Apr 30 00:11:02.853277 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:11:02.853292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:11:02.853303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:11:02.853313 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:11:02.853325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:11:02.853336 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:11:02.853346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:11:02.853357 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:11:02.853369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:11:02.853380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:11:02.853391 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:11:02.853401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:11:02.853412 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:11:02.853423 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 00:11:02.853433 kernel: fuse: init (API version 7.39) Apr 30 00:11:02.853443 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:11:02.853453 kernel: loop: module loaded Apr 30 00:11:02.853466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:11:02.853476 kernel: ACPI: bus type drm_connector registered Apr 30 00:11:02.853486 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:11:02.853496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:11:02.853507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:11:02.853518 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:11:02.853528 systemd[1]: Stopped verity-setup.service. Apr 30 00:11:02.853539 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 30 00:11:02.853550 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:11:02.853582 systemd-journald[1112]: Collecting audit messages is disabled. Apr 30 00:11:02.853603 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:11:02.853614 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:11:02.853625 systemd-journald[1112]: Journal started Apr 30 00:11:02.853650 systemd-journald[1112]: Runtime Journal (/run/log/journal/1202fc91d0c542b2a7429a81d7ba3885) is 5.9M, max 47.3M, 41.4M free. Apr 30 00:11:02.646227 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:11:02.662289 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 00:11:02.662639 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:11:02.857225 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:11:02.857921 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:11:02.859092 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:11:02.861717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:11:02.863088 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:11:02.863237 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:11:02.864717 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:11:02.866004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:11:02.866161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:11:02.867545 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:11:02.867756 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:11:02.869157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:11:02.869299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:11:02.870729 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:11:02.870879 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:11:02.872091 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:11:02.872232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:11:02.874747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:11:02.876548 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:11:02.878060 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:11:02.892384 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:11:02.910009 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:11:02.912228 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:11:02.913252 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:11:02.913294 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:11:02.915179 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:11:02.917579 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 30 00:11:02.919716 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:11:02.920622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:11:02.922182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:11:02.924908 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:11:02.925925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:11:02.929000 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:11:02.930027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:11:02.931973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:11:02.933795 systemd-journald[1112]: Time spent on flushing to /var/log/journal/1202fc91d0c542b2a7429a81d7ba3885 is 25.055ms for 855 entries. Apr 30 00:11:02.933795 systemd-journald[1112]: System Journal (/var/log/journal/1202fc91d0c542b2a7429a81d7ba3885) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:11:02.968880 systemd-journald[1112]: Received client request to flush runtime journal. Apr 30 00:11:02.968918 kernel: loop0: detected capacity change from 0 to 113536 Apr 30 00:11:02.939908 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:11:02.945090 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:11:02.947601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:11:02.948971 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:11:02.950556 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:11:02.951951 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:11:02.955100 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:11:02.958291 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:11:02.967899 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:11:02.973911 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:11:02.982170 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:11:02.982594 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Apr 30 00:11:02.982606 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Apr 30 00:11:02.984423 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:11:02.987413 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:11:02.989653 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:11:02.995789 kernel: loop1: detected capacity change from 0 to 116808 Apr 30 00:11:03.007000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:11:03.012351 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:11:03.013242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 30 00:11:03.015709 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 00:11:03.039207 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:11:03.039722 kernel: loop2: detected capacity change from 0 to 189592 Apr 30 00:11:03.051946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:11:03.067427 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Apr 30 00:11:03.067449 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Apr 30 00:11:03.071548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:11:03.087716 kernel: loop3: detected capacity change from 0 to 113536 Apr 30 00:11:03.093729 kernel: loop4: detected capacity change from 0 to 116808 Apr 30 00:11:03.101769 kernel: loop5: detected capacity change from 0 to 189592 Apr 30 00:11:03.108213 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 00:11:03.108755 (sd-merge)[1183]: Merged extensions into '/usr'. Apr 30 00:11:03.112260 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:11:03.112276 systemd[1]: Reloading... Apr 30 00:11:03.166736 zram_generator::config[1207]: No configuration found. Apr 30 00:11:03.264701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:11:03.265736 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:11:03.300858 systemd[1]: Reloading finished in 188 ms. Apr 30 00:11:03.331509 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:11:03.334094 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:11:03.344895 systemd[1]: Starting ensure-sysext.service... Apr 30 00:11:03.346494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:11:03.358500 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:11:03.358515 systemd[1]: Reloading... Apr 30 00:11:03.369710 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:11:03.369968 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:11:03.370579 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:11:03.371205 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Apr 30 00:11:03.371315 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Apr 30 00:11:03.373771 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:11:03.373871 systemd-tmpfiles[1246]: Skipping /boot Apr 30 00:11:03.380577 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:11:03.380916 systemd-tmpfiles[1246]: Skipping /boot Apr 30 00:11:03.407703 zram_generator::config[1276]: No configuration found. 
Apr 30 00:11:03.491088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:11:03.527742 systemd[1]: Reloading finished in 168 ms. Apr 30 00:11:03.540912 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:11:03.560149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:11:03.568921 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:11:03.571305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:11:03.573749 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:11:03.577969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:11:03.582032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:11:03.585753 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:11:03.589634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:11:03.593054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:11:03.596789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:11:03.601419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:11:03.609140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:11:03.610033 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:11:03.611736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:11:03.611872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:11:03.613266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:11:03.613392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:11:03.614713 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:11:03.614850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:11:03.618250 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Apr 30 00:11:03.624177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:11:03.642017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:11:03.644076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:11:03.647536 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:11:03.648549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:11:03.651415 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:11:03.657440 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:11:03.659435 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:11:03.662513 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 30 00:11:03.666890 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:11:03.670411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:11:03.670550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:11:03.672101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:11:03.672234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:11:03.688786 systemd[1]: Finished ensure-sysext.service. Apr 30 00:11:03.689793 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:11:03.691842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:11:03.695153 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 00:11:03.695616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:11:03.705852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:11:03.708916 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:11:03.710828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:11:03.712881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:11:03.714532 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:11:03.720608 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:11:03.721882 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:11:03.722513 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:11:03.724364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:11:03.724501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:11:03.727729 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1334) Apr 30 00:11:03.728490 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:11:03.728628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:11:03.736099 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:11:03.737357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:11:03.737509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:11:03.777710 augenrules[1392]: No rules Apr 30 00:11:03.771331 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:11:03.773735 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:11:03.781667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 00:11:03.790906 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:11:03.791864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:11:03.791952 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 30 00:11:03.818651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:11:03.840193 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:11:03.841804 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:11:03.852367 systemd-networkd[1376]: lo: Link UP Apr 30 00:11:03.852374 systemd-networkd[1376]: lo: Gained carrier Apr 30 00:11:03.853409 systemd-networkd[1376]: Enumeration completed Apr 30 00:11:03.853770 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:11:03.860792 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:11:03.860801 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:11:03.861485 systemd-networkd[1376]: eth0: Link UP Apr 30 00:11:03.861492 systemd-networkd[1376]: eth0: Gained carrier Apr 30 00:11:03.861505 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:11:03.871354 systemd-resolved[1312]: Positive Trust Anchors: Apr 30 00:11:03.871471 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:11:03.871507 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:11:03.874920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:11:03.884039 systemd-resolved[1312]: Defaulting to hostname 'linux'. Apr 30 00:11:03.885243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:11:03.889631 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:11:03.889826 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:11:03.890368 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Apr 30 00:11:03.890892 systemd[1]: Reached target network.target - Network. Apr 30 00:11:03.890960 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 00:11:03.891003 systemd-timesyncd[1377]: Initial clock synchronization to Wed 2025-04-30 00:11:03.984681 UTC. Apr 30 00:11:03.891643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:11:03.895141 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:11:03.908919 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:11:03.926512 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:11:03.948501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:11:03.966587 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:11:03.969229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:11:03.970277 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:11:03.971334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:11:03.972365 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:11:03.973642 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:11:03.974600 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:11:03.975646 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:11:03.976695 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:11:03.976741 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:11:03.977504 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:11:03.980515 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:11:03.983048 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:11:03.992134 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:11:03.995273 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:11:03.997728 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:11:03.998774 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:11:03.999622 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:11:04.000477 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:11:04.000505 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:11:04.001609 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:11:04.003657 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:11:04.005100 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:11:04.007866 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:11:04.009973 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:11:04.011125 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:11:04.014927 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:11:04.018529 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:11:04.021918 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:11:04.027956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:11:04.046446 jq[1421]: false Apr 30 00:11:04.050927 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:11:04.054066 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 30 00:11:04.054612 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:11:04.061911 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:11:04.065814 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:11:04.066810 extend-filesystems[1422]: Found loop3 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found loop4 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found loop5 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda1 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda2 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda3 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found usr Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda4 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda6 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda7 Apr 30 00:11:04.068015 extend-filesystems[1422]: Found vda9 Apr 30 00:11:04.068015 extend-filesystems[1422]: Checking size of /dev/vda9 Apr 30 00:11:04.067566 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:11:04.088948 extend-filesystems[1422]: Resized partition /dev/vda9 Apr 30 00:11:04.073564 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:11:04.101295 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:11:04.073788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:11:04.102406 jq[1437]: true Apr 30 00:11:04.074051 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:11:04.074220 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:11:04.088103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:11:04.088327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:11:04.104694 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 00:11:04.113927 dbus-daemon[1420]: [system] SELinux support is enabled Apr 30 00:11:04.114165 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:11:04.123735 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1334) Apr 30 00:11:04.144512 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:11:04.144567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:11:04.147112 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:11:04.147127 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 30 00:11:04.149708 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:11:04.151957 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 00:11:04.196098 tar[1442]: linux-arm64/helm Apr 30 00:11:04.196397 jq[1446]: true Apr 30 00:11:04.197107 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:11:04.198924 systemd-logind[1431]: New seat seat0. Apr 30 00:11:04.200656 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:11:04.215580 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 00:11:04.215580 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 00:11:04.215580 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 00:11:04.219750 extend-filesystems[1422]: Resized filesystem in /dev/vda9 Apr 30 00:11:04.219480 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:11:04.221547 update_engine[1435]: I20250430 00:11:04.219916 1435 main.cc:92] Flatcar Update Engine starting Apr 30 00:11:04.219723 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:11:04.224852 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:11:04.225717 update_engine[1435]: I20250430 00:11:04.225198 1435 update_check_scheduler.cc:74] Next update check in 6m20s Apr 30 00:11:04.233614 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:11:04.234081 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:11:04.236913 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:11:04.238511 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 00:11:04.335645 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:11:04.452181 containerd[1452]: time="2025-04-30T00:11:04.452099333Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 00:11:04.486207 containerd[1452]: time="2025-04-30T00:11:04.486021107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:11:04.487672 containerd[1452]: time="2025-04-30T00:11:04.487638173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:11:04.487788 containerd[1452]: time="2025-04-30T00:11:04.487773281Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.487844013Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488009297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488030259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488084978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488096324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488252675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488266637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488278948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488287760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488354911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488723 containerd[1452]: time="2025-04-30T00:11:04.488535403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488971 containerd[1452]: time="2025-04-30T00:11:04.488639450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:11:04.488971 containerd[1452]: time="2025-04-30T00:11:04.488652647Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:11:04.488971 containerd[1452]: time="2025-04-30T00:11:04.488747802Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:11:04.488971 containerd[1452]: time="2025-04-30T00:11:04.488788559Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:11:04.494529 containerd[1452]: time="2025-04-30T00:11:04.494083182Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:11:04.494529 containerd[1452]: time="2025-04-30T00:11:04.494140235Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:11:04.494529 containerd[1452]: time="2025-04-30T00:11:04.494158662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:11:04.494529 containerd[1452]: time="2025-04-30T00:11:04.494175641Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:11:04.494529 containerd[1452]: time="2025-04-30T00:11:04.494192218Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Apr 30 00:11:04.494529 containerd[1452]: time="2025-04-30T00:11:04.494363375Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:11:04.494709 containerd[1452]: time="2025-04-30T00:11:04.494669963Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:11:04.494842 containerd[1452]: time="2025-04-30T00:11:04.494817140Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:11:04.494842 containerd[1452]: time="2025-04-30T00:11:04.494841040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:11:04.494890 containerd[1452]: time="2025-04-30T00:11:04.494856409Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:11:04.494890 containerd[1452]: time="2025-04-30T00:11:04.494870331Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.494890 containerd[1452]: time="2025-04-30T00:11:04.494883326Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.494938 containerd[1452]: time="2025-04-30T00:11:04.494896201Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.494938 containerd[1452]: time="2025-04-30T00:11:04.494909076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.494938 containerd[1452]: time="2025-04-30T00:11:04.494922716Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.494938 containerd[1452]: time="2025-04-30T00:11:04.494935390Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.494947661Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.494958686Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.494982706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.494996908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.495007973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.495019078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.495030303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.495043621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Apr 30 00:11:04.495060 containerd[1452]: time="2025-04-30T00:11:04.495055329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495067319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495080113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495094598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495106266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495117692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495129200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495143000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495167422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495180901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495212 containerd[1452]: time="2025-04-30T00:11:04.495191563Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:11:04.495380 containerd[1452]: time="2025-04-30T00:11:04.495357370Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:11:04.495380 containerd[1452]: time="2025-04-30T00:11:04.495373021Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:11:04.495420 containerd[1452]: time="2025-04-30T00:11:04.495382395Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:11:04.495420 containerd[1452]: time="2025-04-30T00:11:04.495393460Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:11:04.495420 containerd[1452]: time="2025-04-30T00:11:04.495401788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:11:04.495420 containerd[1452]: time="2025-04-30T00:11:04.495414824Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:11:04.495556 containerd[1452]: time="2025-04-30T00:11:04.495540759Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:11:04.495556 containerd[1452]: time="2025-04-30T00:11:04.495556007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 00:11:04.495948 containerd[1452]: time="2025-04-30T00:11:04.495899047Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:11:04.496059 containerd[1452]: time="2025-04-30T00:11:04.495949340Z" level=info msg="Connect containerd service" Apr 30 00:11:04.496059 containerd[1452]: time="2025-04-30T00:11:04.495983701Z" level=info msg="using legacy CRI server" Apr 30 00:11:04.496059 containerd[1452]: time="2025-04-30T00:11:04.495990661Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:11:04.496229 containerd[1452]: time="2025-04-30T00:11:04.496213480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:11:04.496951 containerd[1452]: time="2025-04-30T00:11:04.496923499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:11:04.497147 
containerd[1452]: time="2025-04-30T00:11:04.497119883Z" level=info msg="Start subscribing containerd event" Apr 30 00:11:04.497180 containerd[1452]: time="2025-04-30T00:11:04.497162975Z" level=info msg="Start recovering state" Apr 30 00:11:04.497491 containerd[1452]: time="2025-04-30T00:11:04.497226103Z" level=info msg="Start event monitor" Apr 30 00:11:04.497491 containerd[1452]: time="2025-04-30T00:11:04.497239581Z" level=info msg="Start snapshots syncer" Apr 30 00:11:04.497491 containerd[1452]: time="2025-04-30T00:11:04.497248875Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:11:04.497491 containerd[1452]: time="2025-04-30T00:11:04.497255957Z" level=info msg="Start streaming server" Apr 30 00:11:04.497903 containerd[1452]: time="2025-04-30T00:11:04.497882207Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:11:04.497943 containerd[1452]: time="2025-04-30T00:11:04.497928960Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:11:04.498061 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:11:04.499042 containerd[1452]: time="2025-04-30T00:11:04.498906739Z" level=info msg="containerd successfully booted in 0.049325s" Apr 30 00:11:04.549000 tar[1442]: linux-arm64/LICENSE Apr 30 00:11:04.549198 tar[1442]: linux-arm64/README.md Apr 30 00:11:04.562165 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:11:05.136898 systemd-networkd[1376]: eth0: Gained IPv6LL Apr 30 00:11:05.139795 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:11:05.142575 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:11:05.157033 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 00:11:05.159854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:05.162226 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:11:05.184992 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 00:11:05.185266 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 00:11:05.187040 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:11:05.197543 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:11:05.790482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:05.796132 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:11:06.439407 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:11:06.459766 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:11:06.468002 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:11:06.474998 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:11:06.475259 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:11:06.478037 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:11:06.495723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:11:06.509114 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:11:06.511599 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Apr 30 00:11:06.513044 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:11:06.514117 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:11:06.518771 systemd[1]: Startup finished in 572ms (kernel) + 4.552s (initrd) + 4.287s (userspace) = 9.413s. Apr 30 00:11:06.535384 kubelet[1516]: E0430 00:11:06.535338 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:11:06.537906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:11:06.538060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:11:10.689252 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:11:10.690304 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:53598.service - OpenSSH per-connection server daemon (10.0.0.1:53598). Apr 30 00:11:10.759819 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 53598 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:10.761591 sshd-session[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:10.779300 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:11:10.790955 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:11:10.792609 systemd-logind[1431]: New session 1 of user core. Apr 30 00:11:10.806583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:11:10.808574 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:11:10.815628 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:11:10.888540 systemd[1550]: Queued start job for default target default.target. Apr 30 00:11:10.899618 systemd[1550]: Created slice app.slice - User Application Slice. Apr 30 00:11:10.899646 systemd[1550]: Reached target paths.target - Paths. Apr 30 00:11:10.899658 systemd[1550]: Reached target timers.target - Timers. Apr 30 00:11:10.900874 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:11:10.910498 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:11:10.910556 systemd[1550]: Reached target sockets.target - Sockets. Apr 30 00:11:10.910567 systemd[1550]: Reached target basic.target - Basic System. Apr 30 00:11:10.910601 systemd[1550]: Reached target default.target - Main User Target. Apr 30 00:11:10.910635 systemd[1550]: Startup finished in 89ms. Apr 30 00:11:10.910923 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:11:10.912113 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:11:10.974585 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:53612.service - OpenSSH per-connection server daemon (10.0.0.1:53612). Apr 30 00:11:11.032618 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 53612 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:11.033900 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:11.038246 systemd-logind[1431]: New session 2 of user core. Apr 30 00:11:11.047869 systemd[1]: Started session-2.scope - Session 2 of User core. 
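[note] The kubelet failure above (and the identical ones later in this log) is the normal state of a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml does not exist until something like kubeadm writes it, so the process exits and systemd keeps rescheduling the unit. A minimal sketch of that file, assuming a kubeadm-style setup; the cgroup driver, containerd socket and static pod path below are values that appear elsewhere in this log, everything else in a real file would be generated for you:

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  staticPodPath: /etc/kubernetes/manifests
  EOF

In practice this file is not written by hand; kubeadm init or kubeadm join creates it, which matches the kubeadm-style bootstrap (client certificate rotation, static pods under /etc/kubernetes/manifests) visible further down in the log.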
Apr 30 00:11:11.098996 sshd[1563]: Connection closed by 10.0.0.1 port 53612 Apr 30 00:11:11.099427 sshd-session[1561]: pam_unix(sshd:session): session closed for user core Apr 30 00:11:11.112026 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:53612.service: Deactivated successfully. Apr 30 00:11:11.113270 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:11:11.115833 systemd-logind[1431]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:11:11.117113 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:53624.service - OpenSSH per-connection server daemon (10.0.0.1:53624). Apr 30 00:11:11.117898 systemd-logind[1431]: Removed session 2. Apr 30 00:11:11.162153 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 53624 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:11.163329 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:11.166885 systemd-logind[1431]: New session 3 of user core. Apr 30 00:11:11.177899 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:11:11.227328 sshd[1570]: Connection closed by 10.0.0.1 port 53624 Apr 30 00:11:11.227180 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Apr 30 00:11:11.247172 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:53624.service: Deactivated successfully. Apr 30 00:11:11.250821 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:11:11.252870 systemd-logind[1431]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:11:11.253508 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:53634.service - OpenSSH per-connection server daemon (10.0.0.1:53634). Apr 30 00:11:11.254576 systemd-logind[1431]: Removed session 3. Apr 30 00:11:11.303763 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 53634 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:11.305165 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:11.308867 systemd-logind[1431]: New session 4 of user core. Apr 30 00:11:11.321875 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:11:11.374385 sshd[1577]: Connection closed by 10.0.0.1 port 53634 Apr 30 00:11:11.374274 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Apr 30 00:11:11.392657 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:53634.service: Deactivated successfully. Apr 30 00:11:11.394260 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:11:11.395414 systemd-logind[1431]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:11:11.396525 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:53650.service - OpenSSH per-connection server daemon (10.0.0.1:53650). Apr 30 00:11:11.397366 systemd-logind[1431]: Removed session 4. Apr 30 00:11:11.442783 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 53650 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:11.444066 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:11.448542 systemd-logind[1431]: New session 5 of user core. Apr 30 00:11:11.454830 systemd[1]: Started session-5.scope - Session 5 of User core. 
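[note] Each login above runs in its own socket-activated unit named after the local and peer address (for example sshd@0-10.0.0.122:22-10.0.0.1:53598.service), which is why every session is book-ended by a Started/Deactivated pair for a fresh sshd@ unit. A quick way to list the live per-connection units and sockets, assuming shell access on the node:

  systemctl list-units 'sshd@*' --no-legend
  ss -tnp '( sport = :22 )'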
Apr 30 00:11:11.519725 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:11:11.520023 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:11:11.532508 sudo[1585]: pam_unix(sudo:session): session closed for user root Apr 30 00:11:11.535585 sshd[1584]: Connection closed by 10.0.0.1 port 53650 Apr 30 00:11:11.535980 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Apr 30 00:11:11.546041 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:53650.service: Deactivated successfully. Apr 30 00:11:11.547542 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:11:11.548846 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:11:11.550191 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:53654.service - OpenSSH per-connection server daemon (10.0.0.1:53654). Apr 30 00:11:11.550890 systemd-logind[1431]: Removed session 5. Apr 30 00:11:11.603128 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 53654 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:11.604492 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:11.608508 systemd-logind[1431]: New session 6 of user core. Apr 30 00:11:11.624861 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:11:11.676918 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:11:11.677186 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:11:11.680548 sudo[1594]: pam_unix(sudo:session): session closed for user root Apr 30 00:11:11.685286 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 00:11:11.685555 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:11:11.706012 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:11:11.729341 augenrules[1616]: No rules Apr 30 00:11:11.730812 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:11:11.732723 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:11:11.734280 sudo[1593]: pam_unix(sudo:session): session closed for user root Apr 30 00:11:11.736544 sshd[1592]: Connection closed by 10.0.0.1 port 53654 Apr 30 00:11:11.736440 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Apr 30 00:11:11.753190 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:53654.service: Deactivated successfully. Apr 30 00:11:11.754615 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:11:11.755922 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:11:11.766060 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668). Apr 30 00:11:11.766958 systemd-logind[1431]: Removed session 6. Apr 30 00:11:11.810247 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:11:11.812218 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:11:11.816536 systemd-logind[1431]: New session 7 of user core. Apr 30 00:11:11.827878 systemd[1]: Started session-7.scope - Session 7 of User core. 
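[note] The sudo session above deletes the shipped audit rule fragments and restarts audit-rules.service; augenrules then reports "No rules" because /etc/audit/rules.d/ no longer contains any *.rules files, so an empty rule set is loaded into the kernel. Two hedged checks, assuming the audit userspace tools are installed:

  augenrules --check   # does the compiled /etc/audit/audit.rules still match /etc/audit/rules.d/*.rules?
  auditctl -l          # rules currently loaded in the kernel; prints "No rules" for an empty set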
Apr 30 00:11:11.882587 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:11:11.883217 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:11:12.230959 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:11:12.231114 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:11:12.515851 dockerd[1648]: time="2025-04-30T00:11:12.515724769Z" level=info msg="Starting up" Apr 30 00:11:12.664888 dockerd[1648]: time="2025-04-30T00:11:12.664842676Z" level=info msg="Loading containers: start." Apr 30 00:11:12.825761 kernel: Initializing XFRM netlink socket Apr 30 00:11:12.899003 systemd-networkd[1376]: docker0: Link UP Apr 30 00:11:12.936197 dockerd[1648]: time="2025-04-30T00:11:12.936061414Z" level=info msg="Loading containers: done." Apr 30 00:11:12.956156 dockerd[1648]: time="2025-04-30T00:11:12.956087316Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:11:12.956320 dockerd[1648]: time="2025-04-30T00:11:12.956211709Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Apr 30 00:11:12.956348 dockerd[1648]: time="2025-04-30T00:11:12.956322375Z" level=info msg="Daemon has completed initialization" Apr 30 00:11:12.990495 dockerd[1648]: time="2025-04-30T00:11:12.990428729Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:11:12.990642 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:11:13.746472 containerd[1452]: time="2025-04-30T00:11:13.746405033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 00:11:14.351887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684718173.mount: Deactivated successfully. 
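[note] dockerd above comes up with the overlay2 storage driver and warns that it is not using the native overlayfs diff path because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the warning this mainly degrades image-build performance. Hedged ways to confirm the driver and the kernel option, assuming docker CLI access and a kernel that exposes its build config:

  docker info --format '{{.Driver}}'                     # expected: overlay2
  zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR    # only where /proc/config.gz exists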
Apr 30 00:11:15.439517 containerd[1452]: time="2025-04-30T00:11:15.439436498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:15.439924 containerd[1452]: time="2025-04-30T00:11:15.439885869Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" Apr 30 00:11:15.441671 containerd[1452]: time="2025-04-30T00:11:15.441630565Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:15.444633 containerd[1452]: time="2025-04-30T00:11:15.444575992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:15.445867 containerd[1452]: time="2025-04-30T00:11:15.445841229Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.699374878s" Apr 30 00:11:15.445930 containerd[1452]: time="2025-04-30T00:11:15.445876249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" Apr 30 00:11:15.446791 containerd[1452]: time="2025-04-30T00:11:15.446573173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 00:11:16.463002 containerd[1452]: time="2025-04-30T00:11:16.462955304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:16.464747 containerd[1452]: time="2025-04-30T00:11:16.464699526Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" Apr 30 00:11:16.465944 containerd[1452]: time="2025-04-30T00:11:16.465902849Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:16.468565 containerd[1452]: time="2025-04-30T00:11:16.468516093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:16.469922 containerd[1452]: time="2025-04-30T00:11:16.469799512Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.023189194s" Apr 30 00:11:16.469922 containerd[1452]: time="2025-04-30T00:11:16.469832321Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" Apr 30 00:11:16.470412 
containerd[1452]: time="2025-04-30T00:11:16.470377671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 00:11:16.715271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:11:16.726892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:16.828337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:16.832698 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:11:16.871156 kubelet[1912]: E0430 00:11:16.871087 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:11:16.874458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:11:16.874625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:11:17.631187 containerd[1452]: time="2025-04-30T00:11:17.631136868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:17.633087 containerd[1452]: time="2025-04-30T00:11:17.632968938Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" Apr 30 00:11:17.634190 containerd[1452]: time="2025-04-30T00:11:17.634157141Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:17.637692 containerd[1452]: time="2025-04-30T00:11:17.637234038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:17.638538 containerd[1452]: time="2025-04-30T00:11:17.638467796Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.168056997s" Apr 30 00:11:17.638538 containerd[1452]: time="2025-04-30T00:11:17.638504649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" Apr 30 00:11:17.639179 containerd[1452]: time="2025-04-30T00:11:17.639147915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 00:11:18.608962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205863345.mount: Deactivated successfully. 
Apr 30 00:11:18.818564 containerd[1452]: time="2025-04-30T00:11:18.818517590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:18.819980 containerd[1452]: time="2025-04-30T00:11:18.819943449Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" Apr 30 00:11:18.821031 containerd[1452]: time="2025-04-30T00:11:18.820972448Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:18.823591 containerd[1452]: time="2025-04-30T00:11:18.823417121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:18.824109 containerd[1452]: time="2025-04-30T00:11:18.824081536Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.18489284s" Apr 30 00:11:18.824109 containerd[1452]: time="2025-04-30T00:11:18.824137508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" Apr 30 00:11:18.825079 containerd[1452]: time="2025-04-30T00:11:18.824879667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:11:19.426006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187663270.mount: Deactivated successfully. 
Apr 30 00:11:20.074731 containerd[1452]: time="2025-04-30T00:11:20.074407481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:20.075362 containerd[1452]: time="2025-04-30T00:11:20.075123092Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Apr 30 00:11:20.077720 containerd[1452]: time="2025-04-30T00:11:20.076269560Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:20.079732 containerd[1452]: time="2025-04-30T00:11:20.079664072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:20.081041 containerd[1452]: time="2025-04-30T00:11:20.080862810Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.255953235s" Apr 30 00:11:20.081041 containerd[1452]: time="2025-04-30T00:11:20.080895879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 00:11:20.081496 containerd[1452]: time="2025-04-30T00:11:20.081466427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 00:11:20.532852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831426162.mount: Deactivated successfully. 
Apr 30 00:11:20.537081 containerd[1452]: time="2025-04-30T00:11:20.537029839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:20.538076 containerd[1452]: time="2025-04-30T00:11:20.538022908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Apr 30 00:11:20.538777 containerd[1452]: time="2025-04-30T00:11:20.538747297Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:20.541001 containerd[1452]: time="2025-04-30T00:11:20.540964957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:20.541827 containerd[1452]: time="2025-04-30T00:11:20.541791159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 460.290941ms" Apr 30 00:11:20.541894 containerd[1452]: time="2025-04-30T00:11:20.541829278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 30 00:11:20.542491 containerd[1452]: time="2025-04-30T00:11:20.542302063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 00:11:21.061964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092533128.mount: Deactivated successfully. Apr 30 00:11:22.428108 containerd[1452]: time="2025-04-30T00:11:22.428061111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:22.429756 containerd[1452]: time="2025-04-30T00:11:22.429503153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Apr 30 00:11:22.431715 containerd[1452]: time="2025-04-30T00:11:22.430760135Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:22.433452 containerd[1452]: time="2025-04-30T00:11:22.433417001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:22.435238 containerd[1452]: time="2025-04-30T00:11:22.435201630Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.892867622s" Apr 30 00:11:22.435276 containerd[1452]: time="2025-04-30T00:11:22.435237776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Apr 30 00:11:26.965275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
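[note] kubelet.service has now failed twice and systemd has scheduled another restart ("restart counter is at 2"); the counter keeps climbing until /var/lib/kubelet/config.yaml appears. The restart history can be inspected directly, assuming a systemd recent enough to expose NRestarts:

  systemctl show kubelet.service -p NRestarts,Result,ExecMainStatus
  journalctl -u kubelet.service -n 20 --no-pager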
Apr 30 00:11:26.972888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:27.107616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:27.112543 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:11:27.161637 kubelet[2065]: E0430 00:11:27.161572 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:11:27.164302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:11:27.164454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:11:27.780461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:27.793920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:27.813798 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)... Apr 30 00:11:27.813817 systemd[1]: Reloading... Apr 30 00:11:27.879775 zram_generator::config[2122]: No configuration found. Apr 30 00:11:28.051974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:11:28.107736 systemd[1]: Reloading finished in 293 ms. Apr 30 00:11:28.146629 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:28.150020 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:11:28.150806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:28.152886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:28.264936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:28.271626 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:11:28.317030 kubelet[2166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:11:28.317359 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:11:28.317412 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
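[note] The deprecation warnings above concern flags that older kubelet deployments pass on the command line; --container-runtime-endpoint and --volume-plugin-dir have direct KubeletConfiguration equivalents, while --pod-infra-container-image has none and is simply slated for removal once the image garbage collector learns the sandbox image from CRI, as the message itself says. A hedged sketch of the equivalent config-file fields (field names from the kubelet.config.k8s.io/v1beta1 API; the values are the ones visible in this log):

  # fragment of /var/lib/kubelet/config.yaml
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/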
Apr 30 00:11:28.317620 kubelet[2166]: I0430 00:11:28.317587 2166 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:11:28.903399 kubelet[2166]: I0430 00:11:28.903335 2166 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:11:28.903399 kubelet[2166]: I0430 00:11:28.903376 2166 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:11:28.903659 kubelet[2166]: I0430 00:11:28.903631 2166 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:11:28.936328 kubelet[2166]: E0430 00:11:28.936289 2166 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:28.937513 kubelet[2166]: I0430 00:11:28.937318 2166 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:11:28.943740 kubelet[2166]: E0430 00:11:28.943548 2166 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:11:28.943740 kubelet[2166]: I0430 00:11:28.943590 2166 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:11:28.946937 kubelet[2166]: I0430 00:11:28.946909 2166 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:11:28.947327 kubelet[2166]: I0430 00:11:28.947313 2166 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:11:28.947534 kubelet[2166]: I0430 00:11:28.947497 2166 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:11:28.947793 kubelet[2166]: I0430 00:11:28.947584 2166 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:11:28.948455 kubelet[2166]: I0430 00:11:28.948042 2166 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:11:28.948455 kubelet[2166]: I0430 00:11:28.948059 2166 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:11:28.948455 kubelet[2166]: I0430 00:11:28.948177 2166 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:11:28.949996 kubelet[2166]: I0430 00:11:28.949974 2166 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:11:28.950091 kubelet[2166]: I0430 00:11:28.950079 2166 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:11:28.950152 kubelet[2166]: I0430 00:11:28.950144 2166 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:11:28.950202 kubelet[2166]: I0430 00:11:28.950194 2166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:11:28.952128 kubelet[2166]: I0430 00:11:28.952112 2166 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:11:28.952675 kubelet[2166]: W0430 00:11:28.952622 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:28.953024 kubelet[2166]: W0430 00:11:28.952981 2166 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:28.953060 kubelet[2166]: E0430 00:11:28.953037 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:28.953099 kubelet[2166]: E0430 00:11:28.953082 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:28.954695 kubelet[2166]: I0430 00:11:28.954647 2166 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:11:28.955387 kubelet[2166]: W0430 00:11:28.955354 2166 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 00:11:28.956482 kubelet[2166]: I0430 00:11:28.956429 2166 server.go:1269] "Started kubelet" Apr 30 00:11:28.957222 kubelet[2166]: I0430 00:11:28.957160 2166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:11:28.957456 kubelet[2166]: I0430 00:11:28.957428 2166 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:11:28.957953 kubelet[2166]: I0430 00:11:28.957921 2166 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:11:28.959041 kubelet[2166]: I0430 00:11:28.959003 2166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:11:28.959702 kubelet[2166]: I0430 00:11:28.959386 2166 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:11:28.960413 kubelet[2166]: I0430 00:11:28.960391 2166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:11:28.962651 kubelet[2166]: E0430 00:11:28.962625 2166 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:11:28.962901 kubelet[2166]: I0430 00:11:28.962888 2166 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:11:28.963220 kubelet[2166]: I0430 00:11:28.963193 2166 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:11:28.963332 kubelet[2166]: I0430 00:11:28.963322 2166 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:11:28.963815 kubelet[2166]: W0430 00:11:28.963774 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:28.963925 kubelet[2166]: E0430 00:11:28.963908 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:28.964145 kubelet[2166]: I0430 00:11:28.964127 2166 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:11:28.964279 kubelet[2166]: I0430 00:11:28.964262 2166 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:11:28.969948 kubelet[2166]: E0430 00:11:28.969917 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Apr 30 00:11:28.970638 kubelet[2166]: E0430 00:11:28.970619 2166 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:11:28.970886 kubelet[2166]: I0430 00:11:28.970859 2166 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:11:28.975868 kubelet[2166]: E0430 00:11:28.970780 2166 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183af02eb50e018c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:11:28.956244364 +0000 UTC m=+0.680930779,LastTimestamp:2025-04-30 00:11:28.956244364 +0000 UTC m=+0.680930779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 00:11:28.981203 kubelet[2166]: I0430 00:11:28.981166 2166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:11:28.982807 kubelet[2166]: I0430 00:11:28.982778 2166 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:11:28.982807 kubelet[2166]: I0430 00:11:28.982802 2166 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:11:28.982869 kubelet[2166]: I0430 00:11:28.982823 2166 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:11:28.982890 kubelet[2166]: E0430 00:11:28.982861 2166 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:11:28.987827 kubelet[2166]: W0430 00:11:28.987753 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:28.987899 kubelet[2166]: E0430 00:11:28.987836 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:28.988037 kubelet[2166]: I0430 00:11:28.988018 2166 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:11:28.988037 kubelet[2166]: I0430 00:11:28.988037 2166 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:11:28.988095 kubelet[2166]: I0430 00:11:28.988058 2166 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:11:29.050307 kubelet[2166]: I0430 00:11:29.050255 2166 policy_none.go:49] "None policy: Start" Apr 30 00:11:29.051071 kubelet[2166]: I0430 00:11:29.051038 2166 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:11:29.051071 kubelet[2166]: I0430 00:11:29.051069 2166 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:11:29.056858 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:11:29.063205 kubelet[2166]: E0430 00:11:29.063166 2166 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:11:29.068458 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:11:29.071026 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:11:29.080639 kubelet[2166]: I0430 00:11:29.080610 2166 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:11:29.080979 kubelet[2166]: I0430 00:11:29.080960 2166 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:11:29.081662 kubelet[2166]: I0430 00:11:29.081036 2166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:11:29.081662 kubelet[2166]: I0430 00:11:29.081468 2166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:11:29.082725 kubelet[2166]: E0430 00:11:29.082702 2166 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 00:11:29.092892 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
Apr 30 00:11:29.107387 systemd[1]: Created slice kubepods-burstable-pod49bd362680cd7f6a870f7e75ebdf35f4.slice - libcontainer container kubepods-burstable-pod49bd362680cd7f6a870f7e75ebdf35f4.slice. Apr 30 00:11:29.121360 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. Apr 30 00:11:29.164611 kubelet[2166]: I0430 00:11:29.164492 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49bd362680cd7f6a870f7e75ebdf35f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49bd362680cd7f6a870f7e75ebdf35f4\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:29.164611 kubelet[2166]: I0430 00:11:29.164533 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:29.164611 kubelet[2166]: I0430 00:11:29.164552 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:29.164611 kubelet[2166]: I0430 00:11:29.164567 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:11:29.164611 kubelet[2166]: I0430 00:11:29.164582 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49bd362680cd7f6a870f7e75ebdf35f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bd362680cd7f6a870f7e75ebdf35f4\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:29.164852 kubelet[2166]: I0430 00:11:29.164596 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49bd362680cd7f6a870f7e75ebdf35f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bd362680cd7f6a870f7e75ebdf35f4\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:29.164852 kubelet[2166]: I0430 00:11:29.164609 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:29.164852 kubelet[2166]: I0430 00:11:29.164623 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:29.164852 kubelet[2166]: I0430 00:11:29.164639 2166 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:29.170963 kubelet[2166]: E0430 00:11:29.170916 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" Apr 30 00:11:29.183033 kubelet[2166]: I0430 00:11:29.182993 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:11:29.183465 kubelet[2166]: E0430 00:11:29.183428 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Apr 30 00:11:29.385260 kubelet[2166]: I0430 00:11:29.385194 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:11:29.385784 kubelet[2166]: E0430 00:11:29.385748 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Apr 30 00:11:29.406106 kubelet[2166]: E0430 00:11:29.406070 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:29.408707 containerd[1452]: time="2025-04-30T00:11:29.408629722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Apr 30 00:11:29.420006 kubelet[2166]: E0430 00:11:29.419890 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:29.420615 containerd[1452]: time="2025-04-30T00:11:29.420384074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49bd362680cd7f6a870f7e75ebdf35f4,Namespace:kube-system,Attempt:0,}" Apr 30 00:11:29.423708 kubelet[2166]: E0430 00:11:29.423671 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:29.424430 containerd[1452]: time="2025-04-30T00:11:29.424154272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Apr 30 00:11:29.572194 kubelet[2166]: E0430 00:11:29.572128 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Apr 30 00:11:29.787180 kubelet[2166]: I0430 00:11:29.787139 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:11:29.787505 kubelet[2166]: E0430 00:11:29.787467 2166 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Apr 30 00:11:29.849219 kubelet[2166]: W0430 00:11:29.849125 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:29.849219 kubelet[2166]: E0430 00:11:29.849217 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:29.977838 kubelet[2166]: W0430 00:11:29.977759 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:29.977838 kubelet[2166]: E0430 00:11:29.977828 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:30.170925 kubelet[2166]: W0430 00:11:30.170769 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:30.170925 kubelet[2166]: E0430 00:11:30.170842 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:30.240619 kubelet[2166]: W0430 00:11:30.240556 2166 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Apr 30 00:11:30.240763 kubelet[2166]: E0430 00:11:30.240629 2166 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:11:30.241474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802824348.mount: Deactivated successfully. 
Apr 30 00:11:30.246056 containerd[1452]: time="2025-04-30T00:11:30.246007758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:11:30.247615 containerd[1452]: time="2025-04-30T00:11:30.247546641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Apr 30 00:11:30.248234 containerd[1452]: time="2025-04-30T00:11:30.248169082Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:11:30.249822 containerd[1452]: time="2025-04-30T00:11:30.249785970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:11:30.253653 containerd[1452]: time="2025-04-30T00:11:30.253557816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:11:30.255909 containerd[1452]: time="2025-04-30T00:11:30.255845558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:11:30.256964 containerd[1452]: time="2025-04-30T00:11:30.256911644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 848.178563ms" Apr 30 00:11:30.257358 containerd[1452]: time="2025-04-30T00:11:30.257315806Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:11:30.257623 containerd[1452]: time="2025-04-30T00:11:30.257581457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:11:30.259893 containerd[1452]: time="2025-04-30T00:11:30.259725562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 839.26556ms" Apr 30 00:11:30.297708 containerd[1452]: time="2025-04-30T00:11:30.297599987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 873.373191ms" Apr 30 00:11:30.378095 kubelet[2166]: E0430 00:11:30.378025 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="1.6s" Apr 30 
00:11:30.463094 containerd[1452]: time="2025-04-30T00:11:30.462879483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:30.463442 containerd[1452]: time="2025-04-30T00:11:30.463037656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:30.464230 containerd[1452]: time="2025-04-30T00:11:30.463799409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:30.464230 containerd[1452]: time="2025-04-30T00:11:30.463972638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:30.464230 containerd[1452]: time="2025-04-30T00:11:30.464042515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:30.464230 containerd[1452]: time="2025-04-30T00:11:30.464059093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:30.464230 containerd[1452]: time="2025-04-30T00:11:30.464128729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:30.464230 containerd[1452]: time="2025-04-30T00:11:30.464159363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:30.464883 containerd[1452]: time="2025-04-30T00:11:30.464794457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:30.464883 containerd[1452]: time="2025-04-30T00:11:30.464857246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:30.465079 containerd[1452]: time="2025-04-30T00:11:30.465030556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:30.465864 containerd[1452]: time="2025-04-30T00:11:30.465799116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:30.492944 systemd[1]: Started cri-containerd-0ea24e47b3292f8979d40c54be663e397fc28113d5cb6bda59650de956466f31.scope - libcontainer container 0ea24e47b3292f8979d40c54be663e397fc28113d5cb6bda59650de956466f31. Apr 30 00:11:30.494478 systemd[1]: Started cri-containerd-4b85a128413ffe1c13dcf7cbaf76a30db4a6c57ffef0b91f537b19d47ac880f6.scope - libcontainer container 4b85a128413ffe1c13dcf7cbaf76a30db4a6c57ffef0b91f537b19d47ac880f6. Apr 30 00:11:30.496882 systemd[1]: Started cri-containerd-eeafe22fa29ad2bb2c403e3d8c6f6cc1970bdcd0da5cb4a0869f6ceae87d116a.scope - libcontainer container eeafe22fa29ad2bb2c403e3d8c6f6cc1970bdcd0da5cb4a0869f6ceae87d116a. 
Apr 30 00:11:30.543575 containerd[1452]: time="2025-04-30T00:11:30.542984538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ea24e47b3292f8979d40c54be663e397fc28113d5cb6bda59650de956466f31\"" Apr 30 00:11:30.547059 containerd[1452]: time="2025-04-30T00:11:30.544720557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49bd362680cd7f6a870f7e75ebdf35f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b85a128413ffe1c13dcf7cbaf76a30db4a6c57ffef0b91f537b19d47ac880f6\"" Apr 30 00:11:30.547059 containerd[1452]: time="2025-04-30T00:11:30.546589882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeafe22fa29ad2bb2c403e3d8c6f6cc1970bdcd0da5cb4a0869f6ceae87d116a\"" Apr 30 00:11:30.547174 kubelet[2166]: E0430 00:11:30.546845 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:30.547174 kubelet[2166]: E0430 00:11:30.547065 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:30.548728 kubelet[2166]: E0430 00:11:30.548667 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:30.550071 containerd[1452]: time="2025-04-30T00:11:30.550018752Z" level=info msg="CreateContainer within sandbox \"eeafe22fa29ad2bb2c403e3d8c6f6cc1970bdcd0da5cb4a0869f6ceae87d116a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:11:30.550171 containerd[1452]: time="2025-04-30T00:11:30.550154981Z" level=info msg="CreateContainer within sandbox \"0ea24e47b3292f8979d40c54be663e397fc28113d5cb6bda59650de956466f31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:11:30.550768 containerd[1452]: time="2025-04-30T00:11:30.550743304Z" level=info msg="CreateContainer within sandbox \"4b85a128413ffe1c13dcf7cbaf76a30db4a6c57ffef0b91f537b19d47ac880f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:11:30.568385 containerd[1452]: time="2025-04-30T00:11:30.568329179Z" level=info msg="CreateContainer within sandbox \"0ea24e47b3292f8979d40c54be663e397fc28113d5cb6bda59650de956466f31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f7003e7a42af64377916c73b80e60b131c42669b9c374a0f8305ae6cbbea998\"" Apr 30 00:11:30.569146 containerd[1452]: time="2025-04-30T00:11:30.569104747Z" level=info msg="StartContainer for \"6f7003e7a42af64377916c73b80e60b131c42669b9c374a0f8305ae6cbbea998\"" Apr 30 00:11:30.573101 containerd[1452]: time="2025-04-30T00:11:30.573034125Z" level=info msg="CreateContainer within sandbox \"4b85a128413ffe1c13dcf7cbaf76a30db4a6c57ffef0b91f537b19d47ac880f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb110c849f36eae1456d0be774b94b5fe59c9f4abc86bc654606273005dd6e0a\"" Apr 30 00:11:30.573503 containerd[1452]: time="2025-04-30T00:11:30.573478451Z" level=info msg="StartContainer for \"bb110c849f36eae1456d0be774b94b5fe59c9f4abc86bc654606273005dd6e0a\"" Apr 30 00:11:30.574610 
containerd[1452]: time="2025-04-30T00:11:30.574525917Z" level=info msg="CreateContainer within sandbox \"eeafe22fa29ad2bb2c403e3d8c6f6cc1970bdcd0da5cb4a0869f6ceae87d116a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7b6362e7b4034b2755b1f5c5b523aa9a9d0b9e302e5e44980ccfa993022d3b0\"" Apr 30 00:11:30.574966 containerd[1452]: time="2025-04-30T00:11:30.574935845Z" level=info msg="StartContainer for \"d7b6362e7b4034b2755b1f5c5b523aa9a9d0b9e302e5e44980ccfa993022d3b0\"" Apr 30 00:11:30.592055 kubelet[2166]: I0430 00:11:30.591914 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:11:30.592751 kubelet[2166]: E0430 00:11:30.592583 2166 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Apr 30 00:11:30.605889 systemd[1]: Started cri-containerd-d7b6362e7b4034b2755b1f5c5b523aa9a9d0b9e302e5e44980ccfa993022d3b0.scope - libcontainer container d7b6362e7b4034b2755b1f5c5b523aa9a9d0b9e302e5e44980ccfa993022d3b0. Apr 30 00:11:30.610211 systemd[1]: Started cri-containerd-6f7003e7a42af64377916c73b80e60b131c42669b9c374a0f8305ae6cbbea998.scope - libcontainer container 6f7003e7a42af64377916c73b80e60b131c42669b9c374a0f8305ae6cbbea998. Apr 30 00:11:30.611874 systemd[1]: Started cri-containerd-bb110c849f36eae1456d0be774b94b5fe59c9f4abc86bc654606273005dd6e0a.scope - libcontainer container bb110c849f36eae1456d0be774b94b5fe59c9f4abc86bc654606273005dd6e0a. Apr 30 00:11:30.698256 containerd[1452]: time="2025-04-30T00:11:30.696615573Z" level=info msg="StartContainer for \"6f7003e7a42af64377916c73b80e60b131c42669b9c374a0f8305ae6cbbea998\" returns successfully" Apr 30 00:11:30.698256 containerd[1452]: time="2025-04-30T00:11:30.696635996Z" level=info msg="StartContainer for \"bb110c849f36eae1456d0be774b94b5fe59c9f4abc86bc654606273005dd6e0a\" returns successfully" Apr 30 00:11:30.698256 containerd[1452]: time="2025-04-30T00:11:30.696642443Z" level=info msg="StartContainer for \"d7b6362e7b4034b2755b1f5c5b523aa9a9d0b9e302e5e44980ccfa993022d3b0\" returns successfully" Apr 30 00:11:30.993732 kubelet[2166]: E0430 00:11:30.993424 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:30.996371 kubelet[2166]: E0430 00:11:30.996222 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:30.997993 kubelet[2166]: E0430 00:11:30.997880 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:32.002373 kubelet[2166]: E0430 00:11:32.002293 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:32.194014 kubelet[2166]: I0430 00:11:32.193947 2166 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:11:32.808299 kubelet[2166]: E0430 00:11:32.808268 2166 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 00:11:32.952962 kubelet[2166]: I0430 00:11:32.952895 2166 apiserver.go:52] 
"Watching apiserver" Apr 30 00:11:32.963414 kubelet[2166]: I0430 00:11:32.963377 2166 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:11:32.967326 kubelet[2166]: I0430 00:11:32.967078 2166 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 00:11:34.733770 systemd[1]: Reloading requested from client PID 2451 ('systemctl') (unit session-7.scope)... Apr 30 00:11:34.733785 systemd[1]: Reloading... Apr 30 00:11:34.801833 zram_generator::config[2490]: No configuration found. Apr 30 00:11:34.962434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:11:35.029642 systemd[1]: Reloading finished in 295 ms. Apr 30 00:11:35.068279 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:35.083186 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:11:35.083396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:35.083454 systemd[1]: kubelet.service: Consumed 1.079s CPU time, 115.6M memory peak, 0B memory swap peak. Apr 30 00:11:35.091396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:11:35.187480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:11:35.188437 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:11:35.224771 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:11:35.224771 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:11:35.224771 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:11:35.225666 kubelet[2531]: I0430 00:11:35.225239 2531 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:11:35.234241 kubelet[2531]: I0430 00:11:35.234207 2531 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:11:35.234411 kubelet[2531]: I0430 00:11:35.234401 2531 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:11:35.234958 kubelet[2531]: I0430 00:11:35.234942 2531 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:11:35.237288 kubelet[2531]: I0430 00:11:35.236921 2531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 00:11:35.239010 kubelet[2531]: I0430 00:11:35.238986 2531 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:11:35.241941 kubelet[2531]: E0430 00:11:35.241891 2531 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:11:35.241941 kubelet[2531]: I0430 00:11:35.241936 2531 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:11:35.244343 kubelet[2531]: I0430 00:11:35.244314 2531 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:11:35.244467 kubelet[2531]: I0430 00:11:35.244441 2531 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:11:35.244567 kubelet[2531]: I0430 00:11:35.244532 2531 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:11:35.244768 kubelet[2531]: I0430 00:11:35.244559 2531 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:11:35.244844 kubelet[2531]: I0430 00:11:35.244772 2531 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:11:35.244844 kubelet[2531]: I0430 00:11:35.244782 2531 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:11:35.244844 kubelet[2531]: I0430 00:11:35.244810 2531 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:11:35.244918 kubelet[2531]: I0430 00:11:35.244908 2531 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:11:35.244946 kubelet[2531]: I0430 00:11:35.244923 2531 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:11:35.244946 kubelet[2531]: I0430 00:11:35.244942 
2531 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:11:35.244985 kubelet[2531]: I0430 00:11:35.244950 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:11:35.246308 kubelet[2531]: I0430 00:11:35.246012 2531 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:11:35.246472 kubelet[2531]: I0430 00:11:35.246445 2531 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:11:35.246906 kubelet[2531]: I0430 00:11:35.246884 2531 server.go:1269] "Started kubelet" Apr 30 00:11:35.248426 kubelet[2531]: I0430 00:11:35.248386 2531 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:11:35.248962 kubelet[2531]: I0430 00:11:35.248938 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:11:35.250335 kubelet[2531]: I0430 00:11:35.250194 2531 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:11:35.250506 kubelet[2531]: E0430 00:11:35.250371 2531 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:11:35.250543 kubelet[2531]: I0430 00:11:35.250528 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:11:35.253735 kubelet[2531]: I0430 00:11:35.250958 2531 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:11:35.253735 kubelet[2531]: I0430 00:11:35.251207 2531 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:11:35.256907 kubelet[2531]: I0430 00:11:35.254509 2531 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:11:35.262944 kubelet[2531]: I0430 00:11:35.257487 2531 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:11:35.263444 kubelet[2531]: I0430 00:11:35.263167 2531 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:11:35.263986 kubelet[2531]: I0430 00:11:35.258136 2531 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:11:35.264212 kubelet[2531]: I0430 00:11:35.264193 2531 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:11:35.267181 kubelet[2531]: I0430 00:11:35.267143 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:11:35.268162 kubelet[2531]: E0430 00:11:35.268135 2531 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:11:35.269757 kubelet[2531]: I0430 00:11:35.269547 2531 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:11:35.269918 kubelet[2531]: I0430 00:11:35.269884 2531 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:11:35.269918 kubelet[2531]: I0430 00:11:35.269914 2531 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:11:35.269993 kubelet[2531]: I0430 00:11:35.269934 2531 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:11:35.269993 kubelet[2531]: E0430 00:11:35.269975 2531 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:11:35.302470 kubelet[2531]: I0430 00:11:35.302367 2531 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:11:35.302470 kubelet[2531]: I0430 00:11:35.302387 2531 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:11:35.302470 kubelet[2531]: I0430 00:11:35.302408 2531 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:11:35.302605 kubelet[2531]: I0430 00:11:35.302562 2531 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:11:35.302605 kubelet[2531]: I0430 00:11:35.302572 2531 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:11:35.302605 kubelet[2531]: I0430 00:11:35.302594 2531 policy_none.go:49] "None policy: Start" Apr 30 00:11:35.303247 kubelet[2531]: I0430 00:11:35.303206 2531 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:11:35.303292 kubelet[2531]: I0430 00:11:35.303275 2531 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:11:35.303466 kubelet[2531]: I0430 00:11:35.303419 2531 state_mem.go:75] "Updated machine memory state" Apr 30 00:11:35.309707 kubelet[2531]: I0430 00:11:35.309665 2531 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:11:35.310253 kubelet[2531]: I0430 00:11:35.310205 2531 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:11:35.310411 kubelet[2531]: I0430 00:11:35.310221 2531 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:11:35.310509 kubelet[2531]: I0430 00:11:35.310488 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:11:35.415540 kubelet[2531]: I0430 00:11:35.415512 2531 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:11:35.421626 kubelet[2531]: I0430 00:11:35.421595 2531 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Apr 30 00:11:35.421756 kubelet[2531]: I0430 00:11:35.421714 2531 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 00:11:35.452669 kubelet[2531]: I0430 00:11:35.452434 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49bd362680cd7f6a870f7e75ebdf35f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bd362680cd7f6a870f7e75ebdf35f4\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:35.452669 kubelet[2531]: I0430 00:11:35.452476 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49bd362680cd7f6a870f7e75ebdf35f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bd362680cd7f6a870f7e75ebdf35f4\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:35.452669 kubelet[2531]: I0430 00:11:35.452496 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49bd362680cd7f6a870f7e75ebdf35f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49bd362680cd7f6a870f7e75ebdf35f4\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:35.452669 kubelet[2531]: I0430 00:11:35.452511 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:35.452669 kubelet[2531]: I0430 00:11:35.452526 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:35.452915 kubelet[2531]: I0430 00:11:35.452540 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:35.452915 kubelet[2531]: I0430 00:11:35.452557 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:35.452915 kubelet[2531]: I0430 00:11:35.452572 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:35.452915 kubelet[2531]: I0430 00:11:35.452588 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:11:35.682430 kubelet[2531]: E0430 00:11:35.682313 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:35.682430 kubelet[2531]: E0430 00:11:35.682315 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:35.682986 kubelet[2531]: E0430 00:11:35.682513 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:36.245630 kubelet[2531]: I0430 00:11:36.245573 2531 apiserver.go:52] "Watching apiserver" Apr 30 00:11:36.251499 kubelet[2531]: I0430 00:11:36.251449 2531 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:11:36.291184 kubelet[2531]: E0430 00:11:36.291137 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:36.307956 kubelet[2531]: E0430 00:11:36.307921 2531 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 30 00:11:36.308064 kubelet[2531]: E0430 00:11:36.307933 2531 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 00:11:36.308498 kubelet[2531]: E0430 00:11:36.308130 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:36.308498 kubelet[2531]: E0430 00:11:36.308165 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:36.308498 kubelet[2531]: I0430 00:11:36.308309 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3082997029999999 podStartE2EDuration="1.308299703s" podCreationTimestamp="2025-04-30 00:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:11:36.306755516 +0000 UTC m=+1.115159793" watchObservedRunningTime="2025-04-30 00:11:36.308299703 +0000 UTC m=+1.116703980" Apr 30 00:11:36.324456 kubelet[2531]: I0430 00:11:36.323835 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.323818792 podStartE2EDuration="1.323818792s" podCreationTimestamp="2025-04-30 00:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:11:36.322554132 +0000 UTC m=+1.130958409" watchObservedRunningTime="2025-04-30 00:11:36.323818792 +0000 UTC m=+1.132223069" Apr 30 00:11:36.336037 kubelet[2531]: I0430 00:11:36.335588 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.335570441 podStartE2EDuration="1.335570441s" podCreationTimestamp="2025-04-30 00:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:11:36.335358124 +0000 UTC m=+1.143762441" watchObservedRunningTime="2025-04-30 00:11:36.335570441 +0000 UTC m=+1.143974718" Apr 30 00:11:37.296165 kubelet[2531]: E0430 00:11:37.293629 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:37.296165 kubelet[2531]: E0430 00:11:37.293913 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:39.737103 kubelet[2531]: I0430 00:11:39.737075 2531 kuberuntime_manager.go:1633] "Updating runtime config through cri 
with podcidr" CIDR="192.168.0.0/24" Apr 30 00:11:39.739560 containerd[1452]: time="2025-04-30T00:11:39.739468865Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:11:39.739836 kubelet[2531]: I0430 00:11:39.739716 2531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:11:39.929954 kubelet[2531]: E0430 00:11:39.929902 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:40.065669 sudo[1628]: pam_unix(sudo:session): session closed for user root Apr 30 00:11:40.066980 sshd[1627]: Connection closed by 10.0.0.1 port 53668 Apr 30 00:11:40.068486 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Apr 30 00:11:40.071588 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:11:40.072000 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:53668.service: Deactivated successfully. Apr 30 00:11:40.074948 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:11:40.075266 systemd[1]: session-7.scope: Consumed 7.413s CPU time, 153.6M memory peak, 0B memory swap peak. Apr 30 00:11:40.076114 systemd-logind[1431]: Removed session 7. Apr 30 00:11:40.756558 systemd[1]: Created slice kubepods-besteffort-pod93c3f461_624e_4ea9_8446_ffe1345ad80a.slice - libcontainer container kubepods-besteffort-pod93c3f461_624e_4ea9_8446_ffe1345ad80a.slice. Apr 30 00:11:40.795249 kubelet[2531]: I0430 00:11:40.794823 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93c3f461-624e-4ea9-8446-ffe1345ad80a-xtables-lock\") pod \"kube-proxy-q77wb\" (UID: \"93c3f461-624e-4ea9-8446-ffe1345ad80a\") " pod="kube-system/kube-proxy-q77wb" Apr 30 00:11:40.795249 kubelet[2531]: I0430 00:11:40.794864 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93c3f461-624e-4ea9-8446-ffe1345ad80a-lib-modules\") pod \"kube-proxy-q77wb\" (UID: \"93c3f461-624e-4ea9-8446-ffe1345ad80a\") " pod="kube-system/kube-proxy-q77wb" Apr 30 00:11:40.795249 kubelet[2531]: I0430 00:11:40.794895 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93c3f461-624e-4ea9-8446-ffe1345ad80a-kube-proxy\") pod \"kube-proxy-q77wb\" (UID: \"93c3f461-624e-4ea9-8446-ffe1345ad80a\") " pod="kube-system/kube-proxy-q77wb" Apr 30 00:11:40.795249 kubelet[2531]: I0430 00:11:40.794914 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6sv\" (UniqueName: \"kubernetes.io/projected/93c3f461-624e-4ea9-8446-ffe1345ad80a-kube-api-access-8d6sv\") pod \"kube-proxy-q77wb\" (UID: \"93c3f461-624e-4ea9-8446-ffe1345ad80a\") " pod="kube-system/kube-proxy-q77wb" Apr 30 00:11:40.816091 systemd[1]: Created slice kubepods-besteffort-podedbfa0da_fe89_4a7c_8012_b6fe553402be.slice - libcontainer container kubepods-besteffort-podedbfa0da_fe89_4a7c_8012_b6fe553402be.slice. 
Apr 30 00:11:40.886430 kubelet[2531]: E0430 00:11:40.886114 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:40.895032 kubelet[2531]: I0430 00:11:40.895002 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/edbfa0da-fe89-4a7c-8012-b6fe553402be-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-s9dqd\" (UID: \"edbfa0da-fe89-4a7c-8012-b6fe553402be\") " pod="tigera-operator/tigera-operator-6f6897fdc5-s9dqd" Apr 30 00:11:40.895169 kubelet[2531]: I0430 00:11:40.895057 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgpwz\" (UniqueName: \"kubernetes.io/projected/edbfa0da-fe89-4a7c-8012-b6fe553402be-kube-api-access-cgpwz\") pod \"tigera-operator-6f6897fdc5-s9dqd\" (UID: \"edbfa0da-fe89-4a7c-8012-b6fe553402be\") " pod="tigera-operator/tigera-operator-6f6897fdc5-s9dqd" Apr 30 00:11:41.069142 kubelet[2531]: E0430 00:11:41.069013 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:41.070911 containerd[1452]: time="2025-04-30T00:11:41.070266900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q77wb,Uid:93c3f461-624e-4ea9-8446-ffe1345ad80a,Namespace:kube-system,Attempt:0,}" Apr 30 00:11:41.095967 containerd[1452]: time="2025-04-30T00:11:41.095876721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:41.096407 containerd[1452]: time="2025-04-30T00:11:41.095939034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:41.096407 containerd[1452]: time="2025-04-30T00:11:41.096257686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:41.096407 containerd[1452]: time="2025-04-30T00:11:41.096357779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:41.120017 containerd[1452]: time="2025-04-30T00:11:41.119967403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-s9dqd,Uid:edbfa0da-fe89-4a7c-8012-b6fe553402be,Namespace:tigera-operator,Attempt:0,}" Apr 30 00:11:41.124970 systemd[1]: Started cri-containerd-c6dc4cbce401726297844741df9f75f368958d78d3cb6876703ee7f1c26a9d33.scope - libcontainer container c6dc4cbce401726297844741df9f75f368958d78d3cb6876703ee7f1c26a9d33. Apr 30 00:11:41.141491 containerd[1452]: time="2025-04-30T00:11:41.141403377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:41.141491 containerd[1452]: time="2025-04-30T00:11:41.141461369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:41.141491 containerd[1452]: time="2025-04-30T00:11:41.141477497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:41.141741 containerd[1452]: time="2025-04-30T00:11:41.141555539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:41.144262 containerd[1452]: time="2025-04-30T00:11:41.144141211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q77wb,Uid:93c3f461-624e-4ea9-8446-ffe1345ad80a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6dc4cbce401726297844741df9f75f368958d78d3cb6876703ee7f1c26a9d33\"" Apr 30 00:11:41.144931 kubelet[2531]: E0430 00:11:41.144907 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:41.148186 containerd[1452]: time="2025-04-30T00:11:41.148148487Z" level=info msg="CreateContainer within sandbox \"c6dc4cbce401726297844741df9f75f368958d78d3cb6876703ee7f1c26a9d33\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:11:41.164839 containerd[1452]: time="2025-04-30T00:11:41.164789801Z" level=info msg="CreateContainer within sandbox \"c6dc4cbce401726297844741df9f75f368958d78d3cb6876703ee7f1c26a9d33\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f3e7d7320f2684509973e3891e21144dfe4346647ac2f205c89d28e335e2628f\"" Apr 30 00:11:41.165467 containerd[1452]: time="2025-04-30T00:11:41.165439231Z" level=info msg="StartContainer for \"f3e7d7320f2684509973e3891e21144dfe4346647ac2f205c89d28e335e2628f\"" Apr 30 00:11:41.167946 systemd[1]: Started cri-containerd-a9e8d0ec6f1e66bc9ccf1dd674388b593ec136304109ee2138e2a55031d18927.scope - libcontainer container a9e8d0ec6f1e66bc9ccf1dd674388b593ec136304109ee2138e2a55031d18927. Apr 30 00:11:41.201470 systemd[1]: Started cri-containerd-f3e7d7320f2684509973e3891e21144dfe4346647ac2f205c89d28e335e2628f.scope - libcontainer container f3e7d7320f2684509973e3891e21144dfe4346647ac2f205c89d28e335e2628f. 
Apr 30 00:11:41.214447 containerd[1452]: time="2025-04-30T00:11:41.212459571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-s9dqd,Uid:edbfa0da-fe89-4a7c-8012-b6fe553402be,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a9e8d0ec6f1e66bc9ccf1dd674388b593ec136304109ee2138e2a55031d18927\"" Apr 30 00:11:41.215196 containerd[1452]: time="2025-04-30T00:11:41.215162786Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 00:11:41.244765 containerd[1452]: time="2025-04-30T00:11:41.244712886Z" level=info msg="StartContainer for \"f3e7d7320f2684509973e3891e21144dfe4346647ac2f205c89d28e335e2628f\" returns successfully" Apr 30 00:11:41.307751 kubelet[2531]: E0430 00:11:41.307723 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:41.307751 kubelet[2531]: E0430 00:11:41.307749 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:41.319259 kubelet[2531]: I0430 00:11:41.318978 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q77wb" podStartSLOduration=1.318960477 podStartE2EDuration="1.318960477s" podCreationTimestamp="2025-04-30 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:11:41.31807392 +0000 UTC m=+6.126478197" watchObservedRunningTime="2025-04-30 00:11:41.318960477 +0000 UTC m=+6.127364714" Apr 30 00:11:44.335978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026298501.mount: Deactivated successfully. 
Apr 30 00:11:45.661436 kubelet[2531]: E0430 00:11:45.661013 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:46.319990 kubelet[2531]: E0430 00:11:46.319949 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:46.743229 containerd[1452]: time="2025-04-30T00:11:46.743182678Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:46.745096 containerd[1452]: time="2025-04-30T00:11:46.745046085Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 00:11:46.745972 containerd[1452]: time="2025-04-30T00:11:46.745948436Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:46.748177 containerd[1452]: time="2025-04-30T00:11:46.748141411Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:46.749476 containerd[1452]: time="2025-04-30T00:11:46.748906309Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 5.533706585s" Apr 30 00:11:46.749476 containerd[1452]: time="2025-04-30T00:11:46.748938962Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 00:11:46.751349 containerd[1452]: time="2025-04-30T00:11:46.751307925Z" level=info msg="CreateContainer within sandbox \"a9e8d0ec6f1e66bc9ccf1dd674388b593ec136304109ee2138e2a55031d18927\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 00:11:46.779248 containerd[1452]: time="2025-04-30T00:11:46.779157819Z" level=info msg="CreateContainer within sandbox \"a9e8d0ec6f1e66bc9ccf1dd674388b593ec136304109ee2138e2a55031d18927\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10725b4505dd415f9d7f8a40ef7e641fb123d42884bcea1274ab4aa588660d83\"" Apr 30 00:11:46.780825 containerd[1452]: time="2025-04-30T00:11:46.779742807Z" level=info msg="StartContainer for \"10725b4505dd415f9d7f8a40ef7e641fb123d42884bcea1274ab4aa588660d83\"" Apr 30 00:11:46.806892 systemd[1]: Started cri-containerd-10725b4505dd415f9d7f8a40ef7e641fb123d42884bcea1274ab4aa588660d83.scope - libcontainer container 10725b4505dd415f9d7f8a40ef7e641fb123d42884bcea1274ab4aa588660d83. Apr 30 00:11:46.898086 containerd[1452]: time="2025-04-30T00:11:46.898036310Z" level=info msg="StartContainer for \"10725b4505dd415f9d7f8a40ef7e641fb123d42884bcea1274ab4aa588660d83\" returns successfully" Apr 30 00:11:49.108376 update_engine[1435]: I20250430 00:11:49.108294 1435 update_attempter.cc:509] Updating boot flags... 
Apr 30 00:11:49.130769 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2922) Apr 30 00:11:49.178568 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2922) Apr 30 00:11:49.203851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2922) Apr 30 00:11:49.938319 kubelet[2531]: E0430 00:11:49.938284 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:49.948844 kubelet[2531]: I0430 00:11:49.948781 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-s9dqd" podStartSLOduration=4.412951422 podStartE2EDuration="9.948765942s" podCreationTimestamp="2025-04-30 00:11:40 +0000 UTC" firstStartedPulling="2025-04-30 00:11:41.214153603 +0000 UTC m=+6.022557880" lastFinishedPulling="2025-04-30 00:11:46.749968123 +0000 UTC m=+11.558372400" observedRunningTime="2025-04-30 00:11:47.331469669 +0000 UTC m=+12.139873946" watchObservedRunningTime="2025-04-30 00:11:49.948765942 +0000 UTC m=+14.757170219" Apr 30 00:11:50.647141 systemd[1]: Created slice kubepods-besteffort-pod38d4edf8_b112_4068_affa_7280fc68d89b.slice - libcontainer container kubepods-besteffort-pod38d4edf8_b112_4068_affa_7280fc68d89b.slice. Apr 30 00:11:50.684736 kubelet[2531]: I0430 00:11:50.684638 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/38d4edf8-b112-4068-affa-7280fc68d89b-typha-certs\") pod \"calico-typha-68dcfbd854-4dr2b\" (UID: \"38d4edf8-b112-4068-affa-7280fc68d89b\") " pod="calico-system/calico-typha-68dcfbd854-4dr2b" Apr 30 00:11:50.684736 kubelet[2531]: I0430 00:11:50.684696 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38d4edf8-b112-4068-affa-7280fc68d89b-tigera-ca-bundle\") pod \"calico-typha-68dcfbd854-4dr2b\" (UID: \"38d4edf8-b112-4068-affa-7280fc68d89b\") " pod="calico-system/calico-typha-68dcfbd854-4dr2b" Apr 30 00:11:50.684736 kubelet[2531]: I0430 00:11:50.684720 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr7nv\" (UniqueName: \"kubernetes.io/projected/38d4edf8-b112-4068-affa-7280fc68d89b-kube-api-access-gr7nv\") pod \"calico-typha-68dcfbd854-4dr2b\" (UID: \"38d4edf8-b112-4068-affa-7280fc68d89b\") " pod="calico-system/calico-typha-68dcfbd854-4dr2b" Apr 30 00:11:50.715577 systemd[1]: Created slice kubepods-besteffort-pod0149cbe4_2854_4414_9781_98b071bef220.slice - libcontainer container kubepods-besteffort-pod0149cbe4_2854_4414_9781_98b071bef220.slice. 
Apr 30 00:11:50.785948 kubelet[2531]: I0430 00:11:50.785908 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-xtables-lock\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.785948 kubelet[2531]: I0430 00:11:50.785953 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-cni-net-dir\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786113 kubelet[2531]: I0430 00:11:50.785972 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0149cbe4-2854-4414-9781-98b071bef220-node-certs\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786113 kubelet[2531]: I0430 00:11:50.785989 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-var-lib-calico\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786113 kubelet[2531]: I0430 00:11:50.786005 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-cni-log-dir\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786113 kubelet[2531]: I0430 00:11:50.786044 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-policysync\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786113 kubelet[2531]: I0430 00:11:50.786067 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-flexvol-driver-host\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786301 kubelet[2531]: I0430 00:11:50.786085 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nqcx\" (UniqueName: \"kubernetes.io/projected/0149cbe4-2854-4414-9781-98b071bef220-kube-api-access-4nqcx\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786301 kubelet[2531]: I0430 00:11:50.786110 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-var-run-calico\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786301 kubelet[2531]: I0430 00:11:50.786133 2531 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0149cbe4-2854-4414-9781-98b071bef220-tigera-ca-bundle\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786301 kubelet[2531]: I0430 00:11:50.786151 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-cni-bin-dir\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.786301 kubelet[2531]: I0430 00:11:50.786166 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0149cbe4-2854-4414-9781-98b071bef220-lib-modules\") pod \"calico-node-l625z\" (UID: \"0149cbe4-2854-4414-9781-98b071bef220\") " pod="calico-system/calico-node-l625z" Apr 30 00:11:50.832395 kubelet[2531]: E0430 00:11:50.832284 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:11:50.886850 kubelet[2531]: I0430 00:11:50.886804 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c26bbdc1-7849-41b7-afe2-12b01cd7b775-registration-dir\") pod \"csi-node-driver-h9r9d\" (UID: \"c26bbdc1-7849-41b7-afe2-12b01cd7b775\") " pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:11:50.886984 kubelet[2531]: I0430 00:11:50.886872 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c26bbdc1-7849-41b7-afe2-12b01cd7b775-kubelet-dir\") pod \"csi-node-driver-h9r9d\" (UID: \"c26bbdc1-7849-41b7-afe2-12b01cd7b775\") " pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:11:50.886984 kubelet[2531]: I0430 00:11:50.886892 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c26bbdc1-7849-41b7-afe2-12b01cd7b775-socket-dir\") pod \"csi-node-driver-h9r9d\" (UID: \"c26bbdc1-7849-41b7-afe2-12b01cd7b775\") " pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:11:50.886984 kubelet[2531]: I0430 00:11:50.886911 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzwp\" (UniqueName: \"kubernetes.io/projected/c26bbdc1-7849-41b7-afe2-12b01cd7b775-kube-api-access-5dzwp\") pod \"csi-node-driver-h9r9d\" (UID: \"c26bbdc1-7849-41b7-afe2-12b01cd7b775\") " pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:11:50.886984 kubelet[2531]: I0430 00:11:50.886949 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c26bbdc1-7849-41b7-afe2-12b01cd7b775-varrun\") pod \"csi-node-driver-h9r9d\" (UID: \"c26bbdc1-7849-41b7-afe2-12b01cd7b775\") " pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:11:50.897533 kubelet[2531]: E0430 00:11:50.897252 2531 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.897533 kubelet[2531]: W0430 00:11:50.897285 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.897533 kubelet[2531]: E0430 00:11:50.897325 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.897834 kubelet[2531]: E0430 00:11:50.897609 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.897834 kubelet[2531]: W0430 00:11:50.897620 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.897834 kubelet[2531]: E0430 00:11:50.897631 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.900159 kubelet[2531]: E0430 00:11:50.900136 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.900159 kubelet[2531]: W0430 00:11:50.900156 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.900264 kubelet[2531]: E0430 00:11:50.900172 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.957741 kubelet[2531]: E0430 00:11:50.957658 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:50.966516 containerd[1452]: time="2025-04-30T00:11:50.966454456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68dcfbd854-4dr2b,Uid:38d4edf8-b112-4068-affa-7280fc68d89b,Namespace:calico-system,Attempt:0,}" Apr 30 00:11:50.987656 containerd[1452]: time="2025-04-30T00:11:50.987191899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:50.987656 containerd[1452]: time="2025-04-30T00:11:50.987243915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:50.987656 containerd[1452]: time="2025-04-30T00:11:50.987254158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:50.987656 containerd[1452]: time="2025-04-30T00:11:50.987329621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:50.988066 kubelet[2531]: E0430 00:11:50.987943 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.988066 kubelet[2531]: W0430 00:11:50.987962 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.988066 kubelet[2531]: E0430 00:11:50.987982 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.988534 kubelet[2531]: E0430 00:11:50.988440 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.988534 kubelet[2531]: W0430 00:11:50.988456 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.988534 kubelet[2531]: E0430 00:11:50.988476 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.988934 kubelet[2531]: E0430 00:11:50.988854 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.988934 kubelet[2531]: W0430 00:11:50.988867 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.988934 kubelet[2531]: E0430 00:11:50.988883 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.989304 kubelet[2531]: E0430 00:11:50.989218 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.989304 kubelet[2531]: W0430 00:11:50.989230 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.989304 kubelet[2531]: E0430 00:11:50.989246 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.989598 kubelet[2531]: E0430 00:11:50.989540 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.989598 kubelet[2531]: W0430 00:11:50.989551 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.989598 kubelet[2531]: E0430 00:11:50.989581 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:50.989941 kubelet[2531]: E0430 00:11:50.989866 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.989941 kubelet[2531]: W0430 00:11:50.989878 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.989941 kubelet[2531]: E0430 00:11:50.989923 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.990281 kubelet[2531]: E0430 00:11:50.990222 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.990281 kubelet[2531]: W0430 00:11:50.990233 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.990396 kubelet[2531]: E0430 00:11:50.990365 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.990692 kubelet[2531]: E0430 00:11:50.990614 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.990692 kubelet[2531]: W0430 00:11:50.990629 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.990855 kubelet[2531]: E0430 00:11:50.990787 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.990961 kubelet[2531]: E0430 00:11:50.990951 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.991075 kubelet[2531]: W0430 00:11:50.991016 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.991215 kubelet[2531]: E0430 00:11:50.991143 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.992148 kubelet[2531]: E0430 00:11:50.992136 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.992268 kubelet[2531]: W0430 00:11:50.992222 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.992425 kubelet[2531]: E0430 00:11:50.992331 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:50.992559 kubelet[2531]: E0430 00:11:50.992549 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.992759 kubelet[2531]: W0430 00:11:50.992592 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.992832 kubelet[2531]: E0430 00:11:50.992818 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.992967 kubelet[2531]: E0430 00:11:50.992937 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.992967 kubelet[2531]: W0430 00:11:50.992946 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.993090 kubelet[2531]: E0430 00:11:50.993030 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.993406 kubelet[2531]: E0430 00:11:50.993350 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.993406 kubelet[2531]: W0430 00:11:50.993360 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.993487 kubelet[2531]: E0430 00:11:50.993430 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.993929 kubelet[2531]: E0430 00:11:50.993717 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.993929 kubelet[2531]: W0430 00:11:50.993728 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.993929 kubelet[2531]: E0430 00:11:50.993780 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.994190 kubelet[2531]: E0430 00:11:50.994088 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.994190 kubelet[2531]: W0430 00:11:50.994098 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.994305 kubelet[2531]: E0430 00:11:50.994283 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:50.994502 kubelet[2531]: E0430 00:11:50.994396 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.994502 kubelet[2531]: W0430 00:11:50.994419 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.995317 kubelet[2531]: E0430 00:11:50.994631 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.995931 kubelet[2531]: E0430 00:11:50.995439 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.995931 kubelet[2531]: W0430 00:11:50.995459 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.996241 kubelet[2531]: E0430 00:11:50.996073 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.996407 kubelet[2531]: E0430 00:11:50.996360 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.996407 kubelet[2531]: W0430 00:11:50.996371 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.996526 kubelet[2531]: E0430 00:11:50.996497 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.996885 kubelet[2531]: E0430 00:11:50.996795 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.996885 kubelet[2531]: W0430 00:11:50.996807 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.996885 kubelet[2531]: E0430 00:11:50.996849 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.997119 kubelet[2531]: E0430 00:11:50.997086 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.997119 kubelet[2531]: W0430 00:11:50.997097 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.997314 kubelet[2531]: E0430 00:11:50.997209 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:50.998566 kubelet[2531]: E0430 00:11:50.998395 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.998566 kubelet[2531]: W0430 00:11:50.998410 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.998566 kubelet[2531]: E0430 00:11:50.998448 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:50.999820 kubelet[2531]: E0430 00:11:50.999458 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:50.999820 kubelet[2531]: W0430 00:11:50.999473 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:50.999820 kubelet[2531]: E0430 00:11:50.999525 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:51.001200 kubelet[2531]: E0430 00:11:51.001084 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:51.001200 kubelet[2531]: W0430 00:11:51.001100 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:51.001371 kubelet[2531]: E0430 00:11:51.001359 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:51.001479 kubelet[2531]: W0430 00:11:51.001415 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:51.001479 kubelet[2531]: E0430 00:11:51.001436 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:51.001479 kubelet[2531]: E0430 00:11:51.001461 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:51.008096 systemd[1]: Started cri-containerd-561a78288863df7327d97e18a606b6f687f3ab41a08ee77f2d50832a36f2c593.scope - libcontainer container 561a78288863df7327d97e18a606b6f687f3ab41a08ee77f2d50832a36f2c593. 
Apr 30 00:11:51.011065 kubelet[2531]: E0430 00:11:51.009778 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:51.011065 kubelet[2531]: W0430 00:11:51.009796 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:51.011065 kubelet[2531]: E0430 00:11:51.009813 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:51.020514 kubelet[2531]: E0430 00:11:51.020492 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:51.023781 kubelet[2531]: E0430 00:11:51.023653 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:51.023781 kubelet[2531]: W0430 00:11:51.023672 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:51.023781 kubelet[2531]: E0430 00:11:51.023716 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:51.029188 containerd[1452]: time="2025-04-30T00:11:51.028885522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l625z,Uid:0149cbe4-2854-4414-9781-98b071bef220,Namespace:calico-system,Attempt:0,}" Apr 30 00:11:51.055824 containerd[1452]: time="2025-04-30T00:11:51.055789836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68dcfbd854-4dr2b,Uid:38d4edf8-b112-4068-affa-7280fc68d89b,Namespace:calico-system,Attempt:0,} returns sandbox id \"561a78288863df7327d97e18a606b6f687f3ab41a08ee77f2d50832a36f2c593\"" Apr 30 00:11:51.060862 kubelet[2531]: E0430 00:11:51.060823 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:51.061536 containerd[1452]: time="2025-04-30T00:11:51.061159232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:11:51.061536 containerd[1452]: time="2025-04-30T00:11:51.061222570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:11:51.061536 containerd[1452]: time="2025-04-30T00:11:51.061236894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:51.061536 containerd[1452]: time="2025-04-30T00:11:51.061324119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:11:51.067511 containerd[1452]: time="2025-04-30T00:11:51.067481937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 00:11:51.091930 systemd[1]: Started cri-containerd-73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222.scope - libcontainer container 73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222. Apr 30 00:11:51.115354 containerd[1452]: time="2025-04-30T00:11:51.115321081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l625z,Uid:0149cbe4-2854-4414-9781-98b071bef220,Namespace:calico-system,Attempt:0,} returns sandbox id \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\"" Apr 30 00:11:51.116334 kubelet[2531]: E0430 00:11:51.116308 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:52.280659 kubelet[2531]: E0430 00:11:52.280583 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:11:52.586978 containerd[1452]: time="2025-04-30T00:11:52.586823282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:52.587712 containerd[1452]: time="2025-04-30T00:11:52.587585643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:11:52.588474 containerd[1452]: time="2025-04-30T00:11:52.588412182Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:52.590696 containerd[1452]: time="2025-04-30T00:11:52.590638371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:52.591513 containerd[1452]: time="2025-04-30T00:11:52.591468351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.523803282s" Apr 30 00:11:52.591513 containerd[1452]: time="2025-04-30T00:11:52.591505121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:11:52.592768 containerd[1452]: time="2025-04-30T00:11:52.592744048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:11:52.614088 containerd[1452]: time="2025-04-30T00:11:52.613729482Z" level=info msg="CreateContainer within sandbox \"561a78288863df7327d97e18a606b6f687f3ab41a08ee77f2d50832a36f2c593\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:11:52.628810 containerd[1452]: time="2025-04-30T00:11:52.628763540Z" level=info msg="CreateContainer within sandbox 
\"561a78288863df7327d97e18a606b6f687f3ab41a08ee77f2d50832a36f2c593\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4cd18f9b213b3324881afaf1b58fcd5bd1b627294c9abe6a7f7a55cf45a9a88\"" Apr 30 00:11:52.629711 containerd[1452]: time="2025-04-30T00:11:52.629660538Z" level=info msg="StartContainer for \"e4cd18f9b213b3324881afaf1b58fcd5bd1b627294c9abe6a7f7a55cf45a9a88\"" Apr 30 00:11:52.655157 systemd[1]: Started cri-containerd-e4cd18f9b213b3324881afaf1b58fcd5bd1b627294c9abe6a7f7a55cf45a9a88.scope - libcontainer container e4cd18f9b213b3324881afaf1b58fcd5bd1b627294c9abe6a7f7a55cf45a9a88. Apr 30 00:11:52.715088 containerd[1452]: time="2025-04-30T00:11:52.715028249Z" level=info msg="StartContainer for \"e4cd18f9b213b3324881afaf1b58fcd5bd1b627294c9abe6a7f7a55cf45a9a88\" returns successfully" Apr 30 00:11:53.336568 kubelet[2531]: E0430 00:11:53.336518 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:53.368978 kubelet[2531]: E0430 00:11:53.368946 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.369256 kubelet[2531]: W0430 00:11:53.369025 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.369256 kubelet[2531]: E0430 00:11:53.369050 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.369635 kubelet[2531]: E0430 00:11:53.369601 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.369805 kubelet[2531]: W0430 00:11:53.369715 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.369805 kubelet[2531]: E0430 00:11:53.369733 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.370075 kubelet[2531]: E0430 00:11:53.370061 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.370242 kubelet[2531]: W0430 00:11:53.370155 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.370242 kubelet[2531]: E0430 00:11:53.370170 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.370547 kubelet[2531]: I0430 00:11:53.370422 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68dcfbd854-4dr2b" podStartSLOduration=1.8450020980000001 podStartE2EDuration="3.370408533s" podCreationTimestamp="2025-04-30 00:11:50 +0000 UTC" firstStartedPulling="2025-04-30 00:11:51.067198657 +0000 UTC m=+15.875602934" lastFinishedPulling="2025-04-30 00:11:52.592605092 +0000 UTC m=+17.401009369" observedRunningTime="2025-04-30 00:11:53.370286343 +0000 UTC m=+18.178690660" watchObservedRunningTime="2025-04-30 00:11:53.370408533 +0000 UTC m=+18.178812810" Apr 30 00:11:53.370960 kubelet[2531]: E0430 00:11:53.370875 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.371111 kubelet[2531]: W0430 00:11:53.370890 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.371111 kubelet[2531]: E0430 00:11:53.371044 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.371618 kubelet[2531]: E0430 00:11:53.371496 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.371618 kubelet[2531]: W0430 00:11:53.371510 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.371618 kubelet[2531]: E0430 00:11:53.371530 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.372036 kubelet[2531]: E0430 00:11:53.371956 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.372036 kubelet[2531]: W0430 00:11:53.371971 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.372036 kubelet[2531]: E0430 00:11:53.371982 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.372466 kubelet[2531]: E0430 00:11:53.372451 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.372555 kubelet[2531]: W0430 00:11:53.372543 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.372710 kubelet[2531]: E0430 00:11:53.372694 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.373069 kubelet[2531]: E0430 00:11:53.373054 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.373173 kubelet[2531]: W0430 00:11:53.373158 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.373493 kubelet[2531]: E0430 00:11:53.373327 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.373859 kubelet[2531]: E0430 00:11:53.373837 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.374017 kubelet[2531]: W0430 00:11:53.374000 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.374274 kubelet[2531]: E0430 00:11:53.374143 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.374724 kubelet[2531]: E0430 00:11:53.374653 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.374878 kubelet[2531]: W0430 00:11:53.374810 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.374878 kubelet[2531]: E0430 00:11:53.374830 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.375255 kubelet[2531]: E0430 00:11:53.375238 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.375435 kubelet[2531]: W0430 00:11:53.375323 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.375435 kubelet[2531]: E0430 00:11:53.375341 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.375777 kubelet[2531]: E0430 00:11:53.375763 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.376043 kubelet[2531]: W0430 00:11:53.375906 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.376043 kubelet[2531]: E0430 00:11:53.375932 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.376267 kubelet[2531]: E0430 00:11:53.376253 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.376339 kubelet[2531]: W0430 00:11:53.376327 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.376392 kubelet[2531]: E0430 00:11:53.376383 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.376860 kubelet[2531]: E0430 00:11:53.376782 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.376860 kubelet[2531]: W0430 00:11:53.376796 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.376860 kubelet[2531]: E0430 00:11:53.376807 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.377281 kubelet[2531]: E0430 00:11:53.377251 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.377443 kubelet[2531]: W0430 00:11:53.377365 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.377502 kubelet[2531]: E0430 00:11:53.377382 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.408462 kubelet[2531]: E0430 00:11:53.408434 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.408751 kubelet[2531]: W0430 00:11:53.408613 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.408751 kubelet[2531]: E0430 00:11:53.408639 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.409186 kubelet[2531]: E0430 00:11:53.409096 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.409186 kubelet[2531]: W0430 00:11:53.409112 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.409186 kubelet[2531]: E0430 00:11:53.409135 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.409386 kubelet[2531]: E0430 00:11:53.409344 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.409386 kubelet[2531]: W0430 00:11:53.409363 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.409386 kubelet[2531]: E0430 00:11:53.409381 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.409561 kubelet[2531]: E0430 00:11:53.409542 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.409561 kubelet[2531]: W0430 00:11:53.409554 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.409640 kubelet[2531]: E0430 00:11:53.409563 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.409815 kubelet[2531]: E0430 00:11:53.409801 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.409815 kubelet[2531]: W0430 00:11:53.409813 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.409971 kubelet[2531]: E0430 00:11:53.409835 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.410192 kubelet[2531]: E0430 00:11:53.410059 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.410192 kubelet[2531]: W0430 00:11:53.410076 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.410192 kubelet[2531]: E0430 00:11:53.410092 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.410442 kubelet[2531]: E0430 00:11:53.410412 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.410514 kubelet[2531]: W0430 00:11:53.410501 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.410644 kubelet[2531]: E0430 00:11:53.410583 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.410976 kubelet[2531]: E0430 00:11:53.410960 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.411048 kubelet[2531]: W0430 00:11:53.411036 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.411184 kubelet[2531]: E0430 00:11:53.411094 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.411380 kubelet[2531]: E0430 00:11:53.411318 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.411459 kubelet[2531]: W0430 00:11:53.411447 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.411595 kubelet[2531]: E0430 00:11:53.411574 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.411829 kubelet[2531]: E0430 00:11:53.411814 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.411901 kubelet[2531]: W0430 00:11:53.411889 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.411955 kubelet[2531]: E0430 00:11:53.411946 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.412242 kubelet[2531]: E0430 00:11:53.412224 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.412429 kubelet[2531]: W0430 00:11:53.412311 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.412429 kubelet[2531]: E0430 00:11:53.412334 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.412579 kubelet[2531]: E0430 00:11:53.412566 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.412640 kubelet[2531]: W0430 00:11:53.412629 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.412786 kubelet[2531]: E0430 00:11:53.412766 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.413056 kubelet[2531]: E0430 00:11:53.413042 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.413056 kubelet[2531]: W0430 00:11:53.413055 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.413190 kubelet[2531]: E0430 00:11:53.413080 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.413256 kubelet[2531]: E0430 00:11:53.413231 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.413256 kubelet[2531]: W0430 00:11:53.413240 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.413303 kubelet[2531]: E0430 00:11:53.413257 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.413522 kubelet[2531]: E0430 00:11:53.413510 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.413522 kubelet[2531]: W0430 00:11:53.413522 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.413641 kubelet[2531]: E0430 00:11:53.413535 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.413787 kubelet[2531]: E0430 00:11:53.413773 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.413828 kubelet[2531]: W0430 00:11:53.413787 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.413828 kubelet[2531]: E0430 00:11:53.413804 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:53.414031 kubelet[2531]: E0430 00:11:53.414017 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.414031 kubelet[2531]: W0430 00:11:53.414031 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.414105 kubelet[2531]: E0430 00:11:53.414055 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:11:53.420740 kubelet[2531]: E0430 00:11:53.420714 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:11:53.420740 kubelet[2531]: W0430 00:11:53.420735 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:11:53.420847 kubelet[2531]: E0430 00:11:53.420753 2531 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:11:54.131665 containerd[1452]: time="2025-04-30T00:11:54.131276375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:54.132108 containerd[1452]: time="2025-04-30T00:11:54.132056957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:11:54.133334 containerd[1452]: time="2025-04-30T00:11:54.133287563Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:54.135500 containerd[1452]: time="2025-04-30T00:11:54.135467390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:54.136690 containerd[1452]: time="2025-04-30T00:11:54.136654306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.54387801s" Apr 30 00:11:54.136735 containerd[1452]: time="2025-04-30T00:11:54.136699197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:11:54.138704 containerd[1452]: time="2025-04-30T00:11:54.138593517Z" level=info msg="CreateContainer within sandbox \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:11:54.155265 containerd[1452]: time="2025-04-30T00:11:54.155220905Z" level=info msg="CreateContainer within sandbox \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5\"" Apr 30 00:11:54.155750 containerd[1452]: time="2025-04-30T00:11:54.155728583Z" level=info msg="StartContainer for \"78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5\"" Apr 30 00:11:54.189861 systemd[1]: Started cri-containerd-78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5.scope - libcontainer container 78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5. 
Apr 30 00:11:54.226786 containerd[1452]: time="2025-04-30T00:11:54.226501645Z" level=info msg="StartContainer for \"78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5\" returns successfully" Apr 30 00:11:54.270835 kubelet[2531]: E0430 00:11:54.270784 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:11:54.279585 systemd[1]: cri-containerd-78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5.scope: Deactivated successfully. Apr 30 00:11:54.304529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5-rootfs.mount: Deactivated successfully. Apr 30 00:11:54.340168 kubelet[2531]: I0430 00:11:54.340132 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:11:54.340655 kubelet[2531]: E0430 00:11:54.340461 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:54.341106 kubelet[2531]: E0430 00:11:54.341035 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:54.414115 containerd[1452]: time="2025-04-30T00:11:54.407812297Z" level=info msg="shim disconnected" id=78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5 namespace=k8s.io Apr 30 00:11:54.416963 containerd[1452]: time="2025-04-30T00:11:54.416762979Z" level=warning msg="cleaning up after shim disconnected" id=78cb79c3b6cd06ee54ac07680e6d9584700163330a6afb2cae836f49eafb60a5 namespace=k8s.io Apr 30 00:11:54.416963 containerd[1452]: time="2025-04-30T00:11:54.416797947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:11:55.343142 kubelet[2531]: E0430 00:11:55.342757 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:55.344737 containerd[1452]: time="2025-04-30T00:11:55.344701575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:11:56.270254 kubelet[2531]: E0430 00:11:56.270210 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:11:58.270826 kubelet[2531]: E0430 00:11:58.270764 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:11:58.808251 containerd[1452]: time="2025-04-30T00:11:58.808185625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:58.808876 containerd[1452]: time="2025-04-30T00:11:58.808835541Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:11:58.809415 containerd[1452]: time="2025-04-30T00:11:58.809378359Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:58.811868 containerd[1452]: time="2025-04-30T00:11:58.811828919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:11:58.812720 containerd[1452]: time="2025-04-30T00:11:58.812685273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.46753232s" Apr 30 00:11:58.812720 containerd[1452]: time="2025-04-30T00:11:58.812724200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:11:58.815310 containerd[1452]: time="2025-04-30T00:11:58.815257855Z" level=info msg="CreateContainer within sandbox \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:11:58.829554 containerd[1452]: time="2025-04-30T00:11:58.829500375Z" level=info msg="CreateContainer within sandbox \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5\"" Apr 30 00:11:58.830114 containerd[1452]: time="2025-04-30T00:11:58.830077398Z" level=info msg="StartContainer for \"2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5\"" Apr 30 00:11:58.860955 systemd[1]: Started cri-containerd-2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5.scope - libcontainer container 2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5. Apr 30 00:11:58.892328 containerd[1452]: time="2025-04-30T00:11:58.892273614Z" level=info msg="StartContainer for \"2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5\" returns successfully" Apr 30 00:11:59.354517 kubelet[2531]: E0430 00:11:59.354463 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:59.551947 systemd[1]: cri-containerd-2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5.scope: Deactivated successfully. Apr 30 00:11:59.573085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5-rootfs.mount: Deactivated successfully. Apr 30 00:11:59.578734 kubelet[2531]: I0430 00:11:59.575857 2531 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 00:11:59.617779 systemd[1]: Created slice kubepods-burstable-podf10de97b_7bb8_4314_ad06_307db947bbd3.slice - libcontainer container kubepods-burstable-podf10de97b_7bb8_4314_ad06_307db947bbd3.slice. 
Apr 30 00:11:59.622803 systemd[1]: Created slice kubepods-burstable-pod18e58184_40ff_4d1f_8487_cc0971208414.slice - libcontainer container kubepods-burstable-pod18e58184_40ff_4d1f_8487_cc0971208414.slice. Apr 30 00:11:59.627222 systemd[1]: Created slice kubepods-besteffort-podec608e6a_3467_4db3_99e7_9ccb6cf924be.slice - libcontainer container kubepods-besteffort-podec608e6a_3467_4db3_99e7_9ccb6cf924be.slice. Apr 30 00:11:59.649168 containerd[1452]: time="2025-04-30T00:11:59.649081932Z" level=info msg="shim disconnected" id=2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5 namespace=k8s.io Apr 30 00:11:59.649168 containerd[1452]: time="2025-04-30T00:11:59.649160506Z" level=warning msg="cleaning up after shim disconnected" id=2bca9eea9e3392fd7bf897813be7692254eafb13c3f9b61e3ee0f58188572da5 namespace=k8s.io Apr 30 00:11:59.649168 containerd[1452]: time="2025-04-30T00:11:59.649171708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:11:59.666161 kubelet[2531]: I0430 00:11:59.665771 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78f4\" (UniqueName: \"kubernetes.io/projected/d7f443d6-1623-481d-ac5b-c37dd6fd2c49-kube-api-access-d78f4\") pod \"calico-apiserver-bd89fb76-dn8cn\" (UID: \"d7f443d6-1623-481d-ac5b-c37dd6fd2c49\") " pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:11:59.666161 kubelet[2531]: I0430 00:11:59.665834 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10de97b-7bb8-4314-ad06-307db947bbd3-config-volume\") pod \"coredns-6f6b679f8f-sczzd\" (UID: \"f10de97b-7bb8-4314-ad06-307db947bbd3\") " pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:11:59.666161 kubelet[2531]: I0430 00:11:59.665862 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec608e6a-3467-4db3-99e7-9ccb6cf924be-tigera-ca-bundle\") pod \"calico-kube-controllers-bd4f6bb7c-nl4l2\" (UID: \"ec608e6a-3467-4db3-99e7-9ccb6cf924be\") " pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:11:59.666161 kubelet[2531]: I0430 00:11:59.665883 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18e58184-40ff-4d1f-8487-cc0971208414-config-volume\") pod \"coredns-6f6b679f8f-fmpt9\" (UID: \"18e58184-40ff-4d1f-8487-cc0971208414\") " pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:11:59.666161 kubelet[2531]: I0430 00:11:59.665905 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sffr\" (UniqueName: \"kubernetes.io/projected/18e58184-40ff-4d1f-8487-cc0971208414-kube-api-access-7sffr\") pod \"coredns-6f6b679f8f-fmpt9\" (UID: \"18e58184-40ff-4d1f-8487-cc0971208414\") " pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:11:59.666640 kubelet[2531]: I0430 00:11:59.665930 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msw2c\" (UniqueName: \"kubernetes.io/projected/ec608e6a-3467-4db3-99e7-9ccb6cf924be-kube-api-access-msw2c\") pod \"calico-kube-controllers-bd4f6bb7c-nl4l2\" (UID: \"ec608e6a-3467-4db3-99e7-9ccb6cf924be\") " pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:11:59.666640 kubelet[2531]: I0430 00:11:59.665957 2531 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d7f443d6-1623-481d-ac5b-c37dd6fd2c49-calico-apiserver-certs\") pod \"calico-apiserver-bd89fb76-dn8cn\" (UID: \"d7f443d6-1623-481d-ac5b-c37dd6fd2c49\") " pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:11:59.666640 kubelet[2531]: I0430 00:11:59.665981 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mn2z\" (UniqueName: \"kubernetes.io/projected/f10de97b-7bb8-4314-ad06-307db947bbd3-kube-api-access-5mn2z\") pod \"coredns-6f6b679f8f-sczzd\" (UID: \"f10de97b-7bb8-4314-ad06-307db947bbd3\") " pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:11:59.668113 kubelet[2531]: I0430 00:11:59.666985 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ssv\" (UniqueName: \"kubernetes.io/projected/045c28bd-5788-44ed-ad00-a7c01b791cf2-kube-api-access-j4ssv\") pod \"calico-apiserver-bd89fb76-spgc5\" (UID: \"045c28bd-5788-44ed-ad00-a7c01b791cf2\") " pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:11:59.674046 kubelet[2531]: I0430 00:11:59.673092 2531 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/045c28bd-5788-44ed-ad00-a7c01b791cf2-calico-apiserver-certs\") pod \"calico-apiserver-bd89fb76-spgc5\" (UID: \"045c28bd-5788-44ed-ad00-a7c01b791cf2\") " pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:11:59.679183 systemd[1]: Created slice kubepods-besteffort-podd7f443d6_1623_481d_ac5b_c37dd6fd2c49.slice - libcontainer container kubepods-besteffort-podd7f443d6_1623_481d_ac5b_c37dd6fd2c49.slice. Apr 30 00:11:59.689470 systemd[1]: Created slice kubepods-besteffort-pod045c28bd_5788_44ed_ad00_a7c01b791cf2.slice - libcontainer container kubepods-besteffort-pod045c28bd_5788_44ed_ad00_a7c01b791cf2.slice. 
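Taken together, the reconciler entries above list the volumes being attached for the five pods that are about to get sandboxes. Grouped per pod for readability (all names are copied from the log; the grouping itself is just a summary):

    # operationExecutor.VerifyControllerAttachedVolume entries, grouped by pod.
    volumes_by_pod = {
        "kube-system/coredns-6f6b679f8f-sczzd": [
            "config-volume (configmap)", "kube-api-access-5mn2z (projected token)"],
        "kube-system/coredns-6f6b679f8f-fmpt9": [
            "config-volume (configmap)", "kube-api-access-7sffr (projected token)"],
        "calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2": [
            "tigera-ca-bundle (configmap)", "kube-api-access-msw2c (projected token)"],
        "calico-apiserver/calico-apiserver-bd89fb76-dn8cn": [
            "calico-apiserver-certs (secret)", "kube-api-access-d78f4 (projected token)"],
        "calico-apiserver/calico-apiserver-bd89fb76-spgc5": [
            "calico-apiserver-certs (secret)", "kube-api-access-j4ssv (projected token)"],
    }
    for pod, volumes in volumes_by_pod.items():
        print(f"{pod}: {', '.join(volumes)}")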
Apr 30 00:11:59.920975 kubelet[2531]: E0430 00:11:59.920857 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:59.921907 containerd[1452]: time="2025-04-30T00:11:59.921869846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:0,}" Apr 30 00:11:59.924972 kubelet[2531]: E0430 00:11:59.924944 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:11:59.926178 containerd[1452]: time="2025-04-30T00:11:59.926141685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:0,}" Apr 30 00:11:59.930313 containerd[1452]: time="2025-04-30T00:11:59.930270141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:0,}" Apr 30 00:11:59.985604 containerd[1452]: time="2025-04-30T00:11:59.985490723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:11:59.999470 containerd[1452]: time="2025-04-30T00:11:59.999347217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:12:00.309503 systemd[1]: Created slice kubepods-besteffort-podc26bbdc1_7849_41b7_afe2_12b01cd7b775.slice - libcontainer container kubepods-besteffort-podc26bbdc1_7849_41b7_afe2_12b01cd7b775.slice. 
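Each RunPodSandbox request above goes through the CRI, and an equivalent call can be reproduced by hand with crictl to confirm it fails the same way while Calico networking is unavailable. A hedged sketch; the pod metadata below is a placeholder in the shape used by the crictl documentation, not values from this log:

    import json
    import subprocess
    import tempfile

    # Minimal PodSandboxConfig; name/uid are made up for the probe.
    sandbox_config = {
        "metadata": {"name": "probe-sandbox", "namespace": "default",
                     "uid": "probe-uid-0001", "attempt": 0},
        "log_directory": "/tmp",
        "linux": {},
    }

    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(sandbox_config, f)
        config_path = f.name

    # Expected to fail with the same calico "stat /var/lib/calico/nodename" error
    # seen below until calico/node is running on this host.
    result = subprocess.run(["crictl", "runp", config_path], capture_output=True, text=True)
    print(result.returncode, result.stderr.strip())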
Apr 30 00:12:00.319244 containerd[1452]: time="2025-04-30T00:12:00.312415914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:0,}" Apr 30 00:12:00.361229 kubelet[2531]: E0430 00:12:00.359253 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:00.362882 containerd[1452]: time="2025-04-30T00:12:00.362835357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:12:00.403715 containerd[1452]: time="2025-04-30T00:12:00.403651723Z" level=error msg="Failed to destroy network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.405994 containerd[1452]: time="2025-04-30T00:12:00.405952727Z" level=error msg="encountered an error cleaning up failed sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.406100 containerd[1452]: time="2025-04-30T00:12:00.406027979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.411237 kubelet[2531]: E0430 00:12:00.410766 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.411237 kubelet[2531]: E0430 00:12:00.410865 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:00.411237 kubelet[2531]: E0430 00:12:00.410885 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:00.411397 kubelet[2531]: E0430 00:12:00.410940 2531 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" podUID="d7f443d6-1623-481d-ac5b-c37dd6fd2c49" Apr 30 00:12:00.414968 containerd[1452]: time="2025-04-30T00:12:00.414916062Z" level=error msg="Failed to destroy network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.415519 containerd[1452]: time="2025-04-30T00:12:00.415468709Z" level=error msg="encountered an error cleaning up failed sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.415566 containerd[1452]: time="2025-04-30T00:12:00.415545322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.416615 kubelet[2531]: E0430 00:12:00.416566 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.416744 kubelet[2531]: E0430 00:12:00.416638 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:00.416744 kubelet[2531]: E0430 00:12:00.416660 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:00.416744 kubelet[2531]: E0430 00:12:00.416721 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" podUID="045c28bd-5788-44ed-ad00-a7c01b791cf2" Apr 30 00:12:00.416845 containerd[1452]: time="2025-04-30T00:12:00.416604129Z" level=error msg="Failed to destroy network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.417127 containerd[1452]: time="2025-04-30T00:12:00.416998831Z" level=error msg="encountered an error cleaning up failed sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.417127 containerd[1452]: time="2025-04-30T00:12:00.417068402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.417339 containerd[1452]: time="2025-04-30T00:12:00.417306000Z" level=error msg="Failed to destroy network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.417378 kubelet[2531]: E0430 00:12:00.417355 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.417414 kubelet[2531]: E0430 00:12:00.417389 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:00.417414 kubelet[2531]: E0430 00:12:00.417403 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:00.417466 kubelet[2531]: E0430 00:12:00.417435 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" podUID="ec608e6a-3467-4db3-99e7-9ccb6cf924be" Apr 30 00:12:00.417951 containerd[1452]: time="2025-04-30T00:12:00.417820401Z" level=error msg="encountered an error cleaning up failed sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.417951 containerd[1452]: time="2025-04-30T00:12:00.417869769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.418388 kubelet[2531]: E0430 00:12:00.418353 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.418895 kubelet[2531]: E0430 00:12:00.418820 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:00.418895 kubelet[2531]: E0430 00:12:00.418856 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:00.419013 kubelet[2531]: E0430 00:12:00.418891 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fmpt9" podUID="18e58184-40ff-4d1f-8487-cc0971208414" Apr 30 00:12:00.419453 containerd[1452]: time="2025-04-30T00:12:00.419220982Z" level=error msg="Failed to destroy network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.419558 containerd[1452]: time="2025-04-30T00:12:00.419507987Z" level=error msg="encountered an error cleaning up failed sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.419603 containerd[1452]: time="2025-04-30T00:12:00.419552514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.419780 kubelet[2531]: E0430 00:12:00.419754 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.419833 kubelet[2531]: E0430 00:12:00.419789 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:00.419833 kubelet[2531]: E0430 00:12:00.419808 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:00.419884 kubelet[2531]: E0430 00:12:00.419839 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sczzd" podUID="f10de97b-7bb8-4314-ad06-307db947bbd3" Apr 30 00:12:00.439778 containerd[1452]: time="2025-04-30T00:12:00.439725900Z" level=error msg="Failed to destroy network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.440122 containerd[1452]: time="2025-04-30T00:12:00.440097959Z" level=error msg="encountered an error cleaning up failed sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.440190 containerd[1452]: time="2025-04-30T00:12:00.440170091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.440805 kubelet[2531]: E0430 00:12:00.440421 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:00.440805 kubelet[2531]: E0430 00:12:00.440485 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:00.440805 kubelet[2531]: E0430 00:12:00.440505 2531 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:00.440927 kubelet[2531]: E0430 00:12:00.440557 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:12:00.826102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f-shm.mount: Deactivated successfully. Apr 30 00:12:00.826200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58-shm.mount: Deactivated successfully. Apr 30 00:12:00.826249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705-shm.mount: Deactivated successfully. Apr 30 00:12:01.369179 kubelet[2531]: I0430 00:12:01.368728 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8" Apr 30 00:12:01.370935 kubelet[2531]: I0430 00:12:01.370592 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f" Apr 30 00:12:01.370975 containerd[1452]: time="2025-04-30T00:12:01.369428575Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:01.371212 containerd[1452]: time="2025-04-30T00:12:01.371177514Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:01.371550 containerd[1452]: time="2025-04-30T00:12:01.371516204Z" level=info msg="Ensure that sandbox 7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f in task-service has been cleanup successfully" Apr 30 00:12:01.371766 containerd[1452]: time="2025-04-30T00:12:01.371638823Z" level=info msg="Ensure that sandbox fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8 in task-service has been cleanup successfully" Apr 30 00:12:01.374158 containerd[1452]: time="2025-04-30T00:12:01.374110389Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:01.374257 containerd[1452]: time="2025-04-30T00:12:01.374198042Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:01.374377 containerd[1452]: time="2025-04-30T00:12:01.374141793Z" level=info msg="TearDown network for sandbox 
\"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:01.374377 containerd[1452]: time="2025-04-30T00:12:01.374327741Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:01.375731 containerd[1452]: time="2025-04-30T00:12:01.375090694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:12:01.375142 systemd[1]: run-netns-cni\x2dfa6df342\x2d6847\x2d6239\x2d9554\x2dcf6cee269a36.mount: Deactivated successfully. Apr 30 00:12:01.376110 containerd[1452]: time="2025-04-30T00:12:01.375871289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:1,}" Apr 30 00:12:01.377004 kubelet[2531]: I0430 00:12:01.376219 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58" Apr 30 00:12:01.379152 systemd[1]: run-netns-cni\x2dcfac2e67\x2d9305\x2de10a\x2d9004\x2d73ca6e6fb080.mount: Deactivated successfully. Apr 30 00:12:01.381323 containerd[1452]: time="2025-04-30T00:12:01.380622113Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\"" Apr 30 00:12:01.381323 containerd[1452]: time="2025-04-30T00:12:01.380839745Z" level=info msg="Ensure that sandbox 52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58 in task-service has been cleanup successfully" Apr 30 00:12:01.382018 kubelet[2531]: I0430 00:12:01.381857 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727" Apr 30 00:12:01.383774 containerd[1452]: time="2025-04-30T00:12:01.383583711Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully" Apr 30 00:12:01.383774 containerd[1452]: time="2025-04-30T00:12:01.383636199Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully" Apr 30 00:12:01.383868 containerd[1452]: time="2025-04-30T00:12:01.383816906Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:01.384123 containerd[1452]: time="2025-04-30T00:12:01.383974809Z" level=info msg="Ensure that sandbox 29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727 in task-service has been cleanup successfully" Apr 30 00:12:01.384168 kubelet[2531]: E0430 00:12:01.383860 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:01.384700 containerd[1452]: time="2025-04-30T00:12:01.384489925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:1,}" Apr 30 00:12:01.384767 systemd[1]: run-netns-cni\x2d9a0b7039\x2d18a1\x2dfd59\x2d718a\x2d65e737f0e242.mount: Deactivated successfully. 
Apr 30 00:12:01.385049 containerd[1452]: time="2025-04-30T00:12:01.385021964Z" level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:01.385049 containerd[1452]: time="2025-04-30T00:12:01.385044127Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:01.385981 kubelet[2531]: I0430 00:12:01.385932 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc" Apr 30 00:12:01.387027 containerd[1452]: time="2025-04-30T00:12:01.386960731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:1,}" Apr 30 00:12:01.387458 containerd[1452]: time="2025-04-30T00:12:01.386969933Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\"" Apr 30 00:12:01.387458 containerd[1452]: time="2025-04-30T00:12:01.387329746Z" level=info msg="Ensure that sandbox 574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc in task-service has been cleanup successfully" Apr 30 00:12:01.387790 containerd[1452]: time="2025-04-30T00:12:01.387649633Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully" Apr 30 00:12:01.387790 containerd[1452]: time="2025-04-30T00:12:01.387671396Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully" Apr 30 00:12:01.389037 systemd[1]: run-netns-cni\x2d046758a0\x2d7b9b\x2d079e\x2de392\x2d36f4fa9dc7d4.mount: Deactivated successfully. 
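Every sandbox failure in this stretch, including the Attempt:1 retries that follow, reduces to one precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is up, and the file is not there yet. An equivalent readiness check, sketched in Python under that assumption (not the plugin's actual source):

    import os

    NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node after it starts

    def calico_ready() -> bool:
        """Mimic the precondition the CNI plugin enforces before wiring a pod."""
        try:
            os.stat(NODENAME_FILE)
            return True
        except FileNotFoundError:
            # Matches the log: "stat /var/lib/calico/nodename: no such file or directory:
            # check that the calico/node container is running and has mounted /var/lib/calico/"
            return False

    print("calico CNI ready:", calico_ready())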
Apr 30 00:12:01.389632 containerd[1452]: time="2025-04-30T00:12:01.389494986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:12:01.391052 kubelet[2531]: I0430 00:12:01.390430 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705" Apr 30 00:12:01.391540 containerd[1452]: time="2025-04-30T00:12:01.391501604Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:01.391819 containerd[1452]: time="2025-04-30T00:12:01.391792207Z" level=info msg="Ensure that sandbox a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705 in task-service has been cleanup successfully" Apr 30 00:12:01.392500 containerd[1452]: time="2025-04-30T00:12:01.392417139Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:01.392500 containerd[1452]: time="2025-04-30T00:12:01.392464226Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully" Apr 30 00:12:01.393115 kubelet[2531]: E0430 00:12:01.392927 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:01.393297 containerd[1452]: time="2025-04-30T00:12:01.393265145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:1,}" Apr 30 00:12:01.566257 containerd[1452]: time="2025-04-30T00:12:01.565702636Z" level=error msg="Failed to destroy network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.566387 containerd[1452]: time="2025-04-30T00:12:01.566270960Z" level=error msg="encountered an error cleaning up failed sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.566387 containerd[1452]: time="2025-04-30T00:12:01.566340130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.566596 kubelet[2531]: E0430 00:12:01.566556 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.566673 kubelet[2531]: E0430 00:12:01.566619 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:01.566673 kubelet[2531]: E0430 00:12:01.566640 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:01.566813 kubelet[2531]: E0430 00:12:01.566729 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" podUID="ec608e6a-3467-4db3-99e7-9ccb6cf924be" Apr 30 00:12:01.586623 containerd[1452]: time="2025-04-30T00:12:01.586569886Z" level=error msg="Failed to destroy network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.586931 containerd[1452]: time="2025-04-30T00:12:01.586884812Z" level=error msg="Failed to destroy network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.587105 containerd[1452]: time="2025-04-30T00:12:01.587072120Z" level=error msg="encountered an error cleaning up failed sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.587160 containerd[1452]: time="2025-04-30T00:12:01.587138170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.587262 containerd[1452]: time="2025-04-30T00:12:01.587229143Z" level=error msg="encountered an error cleaning up failed sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.587334 containerd[1452]: time="2025-04-30T00:12:01.587286712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.588079 kubelet[2531]: E0430 00:12:01.587496 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.588079 kubelet[2531]: E0430 00:12:01.587549 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:01.588079 kubelet[2531]: E0430 00:12:01.587570 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:01.588197 kubelet[2531]: E0430 00:12:01.587607 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" podUID="045c28bd-5788-44ed-ad00-a7c01b791cf2" Apr 30 00:12:01.588778 kubelet[2531]: E0430 00:12:01.588729 2531 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.588857 kubelet[2531]: E0430 00:12:01.588787 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:01.588857 kubelet[2531]: E0430 00:12:01.588806 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:01.588906 kubelet[2531]: E0430 00:12:01.588849 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" podUID="d7f443d6-1623-481d-ac5b-c37dd6fd2c49" Apr 30 00:12:01.600866 containerd[1452]: time="2025-04-30T00:12:01.600816515Z" level=error msg="Failed to destroy network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.601181 containerd[1452]: time="2025-04-30T00:12:01.601158086Z" level=error msg="encountered an error cleaning up failed sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.601236 containerd[1452]: time="2025-04-30T00:12:01.601217174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
00:12:01.601448 kubelet[2531]: E0430 00:12:01.601410 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.601686 kubelet[2531]: E0430 00:12:01.601547 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:01.601686 kubelet[2531]: E0430 00:12:01.601574 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:01.601686 kubelet[2531]: E0430 00:12:01.601635 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fmpt9" podUID="18e58184-40ff-4d1f-8487-cc0971208414" Apr 30 00:12:01.604999 containerd[1452]: time="2025-04-30T00:12:01.604841351Z" level=error msg="Failed to destroy network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605397 containerd[1452]: time="2025-04-30T00:12:01.605363668Z" level=error msg="Failed to destroy network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605466 containerd[1452]: time="2025-04-30T00:12:01.605371749Z" level=error msg="encountered an error cleaning up failed sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605513 containerd[1452]: time="2025-04-30T00:12:01.605499808Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605673 containerd[1452]: time="2025-04-30T00:12:01.605644310Z" level=error msg="encountered an error cleaning up failed sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605876 containerd[1452]: time="2025-04-30T00:12:01.605719841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605955 kubelet[2531]: E0430 00:12:01.605725 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605955 kubelet[2531]: E0430 00:12:01.605876 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:01.605955 kubelet[2531]: E0430 00:12:01.605920 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:01.606253 kubelet[2531]: E0430 00:12:01.605939 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:01.606253 kubelet[2531]: E0430 00:12:01.606042 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sczzd" podUID="f10de97b-7bb8-4314-ad06-307db947bbd3" Apr 30 00:12:01.609571 kubelet[2531]: E0430 00:12:01.605787 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:01.609655 kubelet[2531]: E0430 00:12:01.609581 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:01.609655 kubelet[2531]: E0430 00:12:01.609627 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:12:01.826578 systemd[1]: run-netns-cni\x2d54bca7b4\x2d22b5\x2d150a\x2d2058\x2ddd218aefee73.mount: Deactivated successfully. Apr 30 00:12:01.826667 systemd[1]: run-netns-cni\x2d4e1ee163\x2d5f95\x2d8ed2\x2d0f89\x2d74d517dff630.mount: Deactivated successfully. 
Apr 30 00:12:02.394296 kubelet[2531]: I0430 00:12:02.394264 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed" Apr 30 00:12:02.396261 containerd[1452]: time="2025-04-30T00:12:02.394968752Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\"" Apr 30 00:12:02.396261 containerd[1452]: time="2025-04-30T00:12:02.395129174Z" level=info msg="Ensure that sandbox 872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed in task-service has been cleanup successfully" Apr 30 00:12:02.396261 containerd[1452]: time="2025-04-30T00:12:02.396022218Z" level=info msg="TearDown network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" successfully" Apr 30 00:12:02.396261 containerd[1452]: time="2025-04-30T00:12:02.396039500Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" returns successfully" Apr 30 00:12:02.396692 containerd[1452]: time="2025-04-30T00:12:02.396565373Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:02.396692 containerd[1452]: time="2025-04-30T00:12:02.396634543Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:02.396692 containerd[1452]: time="2025-04-30T00:12:02.396643224Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully" Apr 30 00:12:02.398390 systemd[1]: run-netns-cni\x2d368e17d6\x2da546\x2dc4c1\x2d166a\x2d9c6175e9129c.mount: Deactivated successfully. Apr 30 00:12:02.399051 kubelet[2531]: E0430 00:12:02.398744 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:02.399555 containerd[1452]: time="2025-04-30T00:12:02.399181337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:2,}" Apr 30 00:12:02.399713 kubelet[2531]: I0430 00:12:02.399667 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5" Apr 30 00:12:02.400705 containerd[1452]: time="2025-04-30T00:12:02.400430270Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 00:12:02.400705 containerd[1452]: time="2025-04-30T00:12:02.400608735Z" level=info msg="Ensure that sandbox a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5 in task-service has been cleanup successfully" Apr 30 00:12:02.400861 containerd[1452]: time="2025-04-30T00:12:02.400806642Z" level=info msg="TearDown network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" successfully" Apr 30 00:12:02.400861 containerd[1452]: time="2025-04-30T00:12:02.400829125Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" returns successfully" Apr 30 00:12:02.401252 containerd[1452]: time="2025-04-30T00:12:02.401225940Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:02.401323 containerd[1452]: time="2025-04-30T00:12:02.401306792Z" 
level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:02.401323 containerd[1452]: time="2025-04-30T00:12:02.401319273Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:02.401869 kubelet[2531]: I0430 00:12:02.401843 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098" Apr 30 00:12:02.402244 containerd[1452]: time="2025-04-30T00:12:02.402213157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:2,}" Apr 30 00:12:02.402299 containerd[1452]: time="2025-04-30T00:12:02.402248722Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:02.402964 containerd[1452]: time="2025-04-30T00:12:02.402932857Z" level=info msg="Ensure that sandbox 4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098 in task-service has been cleanup successfully" Apr 30 00:12:02.403278 containerd[1452]: time="2025-04-30T00:12:02.403104561Z" level=info msg="TearDown network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" successfully" Apr 30 00:12:02.403278 containerd[1452]: time="2025-04-30T00:12:02.403123044Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" returns successfully" Apr 30 00:12:02.403456 containerd[1452]: time="2025-04-30T00:12:02.403330352Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:02.403456 containerd[1452]: time="2025-04-30T00:12:02.403397322Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:02.403456 containerd[1452]: time="2025-04-30T00:12:02.403408323Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:02.405266 kubelet[2531]: I0430 00:12:02.404838 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e" Apr 30 00:12:02.405590 containerd[1452]: time="2025-04-30T00:12:02.405559702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:2,}" Apr 30 00:12:02.405650 containerd[1452]: time="2025-04-30T00:12:02.405623871Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:02.406040 containerd[1452]: time="2025-04-30T00:12:02.405758930Z" level=info msg="Ensure that sandbox f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e in task-service has been cleanup successfully" Apr 30 00:12:02.406040 containerd[1452]: time="2025-04-30T00:12:02.405919552Z" level=info msg="TearDown network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" successfully" Apr 30 00:12:02.406040 containerd[1452]: time="2025-04-30T00:12:02.405931914Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" returns successfully" Apr 30 00:12:02.406805 containerd[1452]: 
time="2025-04-30T00:12:02.406214913Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:02.406805 containerd[1452]: time="2025-04-30T00:12:02.406298644Z" level=info msg="TearDown network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:02.406805 containerd[1452]: time="2025-04-30T00:12:02.406309286Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:02.407280 systemd[1]: run-netns-cni\x2d3f514b67\x2d2070\x2d613b\x2d174a\x2d39092450f17d.mount: Deactivated successfully. Apr 30 00:12:02.408077 containerd[1452]: time="2025-04-30T00:12:02.407852100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:2,}" Apr 30 00:12:02.409561 kubelet[2531]: I0430 00:12:02.409270 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239" Apr 30 00:12:02.411420 kubelet[2531]: I0430 00:12:02.410860 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2" Apr 30 00:12:02.411521 systemd[1]: run-netns-cni\x2d1c1fb3fe\x2d8284\x2d9465\x2d6fda\x2db7bf17fbf53e.mount: Deactivated successfully. Apr 30 00:12:02.412396 containerd[1452]: time="2025-04-30T00:12:02.411644867Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\"" Apr 30 00:12:02.412396 containerd[1452]: time="2025-04-30T00:12:02.411708075Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\"" Apr 30 00:12:02.412396 containerd[1452]: time="2025-04-30T00:12:02.411815210Z" level=info msg="Ensure that sandbox 04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239 in task-service has been cleanup successfully" Apr 30 00:12:02.412396 containerd[1452]: time="2025-04-30T00:12:02.412018918Z" level=info msg="Ensure that sandbox 800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2 in task-service has been cleanup successfully" Apr 30 00:12:02.411619 systemd[1]: run-netns-cni\x2de3dfc889\x2d5693\x2d4075\x2d30fb\x2d1c9c0e27786f.mount: Deactivated successfully. 
Apr 30 00:12:02.413008 containerd[1452]: time="2025-04-30T00:12:02.412870077Z" level=info msg="TearDown network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" successfully" Apr 30 00:12:02.413008 containerd[1452]: time="2025-04-30T00:12:02.412919804Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" returns successfully" Apr 30 00:12:02.413008 containerd[1452]: time="2025-04-30T00:12:02.412934086Z" level=info msg="TearDown network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" successfully" Apr 30 00:12:02.413008 containerd[1452]: time="2025-04-30T00:12:02.412953688Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" returns successfully" Apr 30 00:12:02.413818 containerd[1452]: time="2025-04-30T00:12:02.413529488Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\"" Apr 30 00:12:02.413818 containerd[1452]: time="2025-04-30T00:12:02.413619221Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully" Apr 30 00:12:02.413818 containerd[1452]: time="2025-04-30T00:12:02.413628502Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully" Apr 30 00:12:02.413818 containerd[1452]: time="2025-04-30T00:12:02.413634103Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\"" Apr 30 00:12:02.413927 kubelet[2531]: E0430 00:12:02.413798 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:02.414433 systemd[1]: run-netns-cni\x2d57f0b676\x2d2873\x2d1e43\x2d8d8b\x2dc9b7e3272f9c.mount: Deactivated successfully. Apr 30 00:12:02.414512 systemd[1]: run-netns-cni\x2d4460f5b4\x2d50c2\x2d3a5b\x2d2798\x2dd9120f5479fb.mount: Deactivated successfully. 
Apr 30 00:12:02.416731 containerd[1452]: time="2025-04-30T00:12:02.414949845Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully" Apr 30 00:12:02.416731 containerd[1452]: time="2025-04-30T00:12:02.414977209Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully" Apr 30 00:12:02.416731 containerd[1452]: time="2025-04-30T00:12:02.415118229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:2,}" Apr 30 00:12:02.416731 containerd[1452]: time="2025-04-30T00:12:02.415856091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:2,}" Apr 30 00:12:02.695296 containerd[1452]: time="2025-04-30T00:12:02.695106013Z" level=error msg="Failed to destroy network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.695761 containerd[1452]: time="2025-04-30T00:12:02.695482386Z" level=error msg="encountered an error cleaning up failed sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.695761 containerd[1452]: time="2025-04-30T00:12:02.695538313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.696386 kubelet[2531]: E0430 00:12:02.696054 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.696386 kubelet[2531]: E0430 00:12:02.696246 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:02.696386 kubelet[2531]: E0430 00:12:02.696273 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:02.696811 kubelet[2531]: E0430 00:12:02.696773 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fmpt9" podUID="18e58184-40ff-4d1f-8487-cc0971208414" Apr 30 00:12:02.703638 containerd[1452]: time="2025-04-30T00:12:02.703341757Z" level=error msg="Failed to destroy network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.703776 containerd[1452]: time="2025-04-30T00:12:02.703754694Z" level=error msg="encountered an error cleaning up failed sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.703838 containerd[1452]: time="2025-04-30T00:12:02.703811702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.704095 kubelet[2531]: E0430 00:12:02.704042 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.704174 kubelet[2531]: E0430 00:12:02.704106 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:02.704174 kubelet[2531]: E0430 00:12:02.704127 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:02.704227 kubelet[2531]: E0430 00:12:02.704163 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sczzd" podUID="f10de97b-7bb8-4314-ad06-307db947bbd3" Apr 30 00:12:02.728760 containerd[1452]: time="2025-04-30T00:12:02.728698156Z" level=error msg="Failed to destroy network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.729541 containerd[1452]: time="2025-04-30T00:12:02.729488986Z" level=error msg="encountered an error cleaning up failed sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.729631 containerd[1452]: time="2025-04-30T00:12:02.729561996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.729826 kubelet[2531]: E0430 00:12:02.729784 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.729869 kubelet[2531]: E0430 00:12:02.729849 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:02.729904 kubelet[2531]: E0430 00:12:02.729869 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:02.729947 kubelet[2531]: E0430 00:12:02.729915 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" podUID="d7f443d6-1623-481d-ac5b-c37dd6fd2c49" Apr 30 00:12:02.822162 containerd[1452]: time="2025-04-30T00:12:02.822027671Z" level=error msg="Failed to destroy network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.822674 containerd[1452]: time="2025-04-30T00:12:02.822556665Z" level=error msg="encountered an error cleaning up failed sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.822674 containerd[1452]: time="2025-04-30T00:12:02.822624794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.822855 containerd[1452]: time="2025-04-30T00:12:02.822760053Z" level=error msg="Failed to destroy network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.822906 kubelet[2531]: E0430 00:12:02.822833 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.822906 kubelet[2531]: E0430 00:12:02.822898 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:02.822978 kubelet[2531]: E0430 00:12:02.822917 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:02.823007 kubelet[2531]: E0430 00:12:02.822975 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:12:02.823069 containerd[1452]: time="2025-04-30T00:12:02.823010648Z" level=error msg="encountered an error cleaning up failed sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.823094 containerd[1452]: time="2025-04-30T00:12:02.823051733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.827075 kubelet[2531]: E0430 00:12:02.823762 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.827075 kubelet[2531]: E0430 00:12:02.823797 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:02.827075 
kubelet[2531]: E0430 00:12:02.823810 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:02.827373 kubelet[2531]: E0430 00:12:02.823837 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" podUID="ec608e6a-3467-4db3-99e7-9ccb6cf924be" Apr 30 00:12:02.835867 containerd[1452]: time="2025-04-30T00:12:02.835817225Z" level=error msg="Failed to destroy network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.837945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9-shm.mount: Deactivated successfully. 
Apr 30 00:12:02.838800 containerd[1452]: time="2025-04-30T00:12:02.838758834Z" level=error msg="encountered an error cleaning up failed sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.838858 containerd[1452]: time="2025-04-30T00:12:02.838840885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.839098 kubelet[2531]: E0430 00:12:02.839059 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:02.839153 kubelet[2531]: E0430 00:12:02.839118 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:02.839153 kubelet[2531]: E0430 00:12:02.839137 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:02.839254 kubelet[2531]: E0430 00:12:02.839177 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" podUID="045c28bd-5788-44ed-ad00-a7c01b791cf2" Apr 30 00:12:03.415151 kubelet[2531]: I0430 00:12:03.414775 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6" Apr 30 00:12:03.415817 containerd[1452]: 
time="2025-04-30T00:12:03.415779928Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" Apr 30 00:12:03.416043 containerd[1452]: time="2025-04-30T00:12:03.415958432Z" level=info msg="Ensure that sandbox d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6 in task-service has been cleanup successfully" Apr 30 00:12:03.416179 containerd[1452]: time="2025-04-30T00:12:03.416160858Z" level=info msg="TearDown network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" successfully" Apr 30 00:12:03.416209 containerd[1452]: time="2025-04-30T00:12:03.416178100Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" returns successfully" Apr 30 00:12:03.416774 containerd[1452]: time="2025-04-30T00:12:03.416749095Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 00:12:03.416846 containerd[1452]: time="2025-04-30T00:12:03.416832225Z" level=info msg="TearDown network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" successfully" Apr 30 00:12:03.416875 containerd[1452]: time="2025-04-30T00:12:03.416845907Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" returns successfully" Apr 30 00:12:03.417960 containerd[1452]: time="2025-04-30T00:12:03.417926888Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:03.418045 containerd[1452]: time="2025-04-30T00:12:03.418029141Z" level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:03.418076 containerd[1452]: time="2025-04-30T00:12:03.418045303Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:03.419466 containerd[1452]: time="2025-04-30T00:12:03.419426203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:3,}" Apr 30 00:12:03.420436 kubelet[2531]: I0430 00:12:03.419917 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef" Apr 30 00:12:03.420672 containerd[1452]: time="2025-04-30T00:12:03.420629720Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" Apr 30 00:12:03.420822 containerd[1452]: time="2025-04-30T00:12:03.420805022Z" level=info msg="Ensure that sandbox 2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef in task-service has been cleanup successfully" Apr 30 00:12:03.422621 systemd[1]: run-netns-cni\x2db1771ebc\x2de94f\x2d3b24\x2d5f5d\x2d16fbf44c6132.mount: Deactivated successfully. 
Apr 30 00:12:03.423868 containerd[1452]: time="2025-04-30T00:12:03.423834777Z" level=info msg="TearDown network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" successfully" Apr 30 00:12:03.423921 containerd[1452]: time="2025-04-30T00:12:03.423870181Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" returns successfully" Apr 30 00:12:03.424528 kubelet[2531]: I0430 00:12:03.424404 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b" Apr 30 00:12:03.425170 containerd[1452]: time="2025-04-30T00:12:03.424997048Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" Apr 30 00:12:03.425264 containerd[1452]: time="2025-04-30T00:12:03.425177151Z" level=info msg="Ensure that sandbox 7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b in task-service has been cleanup successfully" Apr 30 00:12:03.425464 containerd[1452]: time="2025-04-30T00:12:03.425422503Z" level=info msg="TearDown network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" successfully" Apr 30 00:12:03.425464 containerd[1452]: time="2025-04-30T00:12:03.425442426Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" returns successfully" Apr 30 00:12:03.425594 containerd[1452]: time="2025-04-30T00:12:03.425009530Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:03.425594 containerd[1452]: time="2025-04-30T00:12:03.425553920Z" level=info msg="TearDown network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" successfully" Apr 30 00:12:03.425594 containerd[1452]: time="2025-04-30T00:12:03.425562361Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" returns successfully" Apr 30 00:12:03.425820 systemd[1]: run-netns-cni\x2d6af9b3c3\x2df01a\x2d983c\x2dce59\x2da5261cac4003.mount: Deactivated successfully. 
Apr 30 00:12:03.426706 containerd[1452]: time="2025-04-30T00:12:03.426178402Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:03.426706 containerd[1452]: time="2025-04-30T00:12:03.426270574Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:03.426706 containerd[1452]: time="2025-04-30T00:12:03.426280375Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:03.426706 containerd[1452]: time="2025-04-30T00:12:03.426333142Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:03.426706 containerd[1452]: time="2025-04-30T00:12:03.426393510Z" level=info msg="TearDown network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" successfully" Apr 30 00:12:03.426706 containerd[1452]: time="2025-04-30T00:12:03.426402351Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" returns successfully" Apr 30 00:12:03.427115 containerd[1452]: time="2025-04-30T00:12:03.427089240Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:03.427998 containerd[1452]: time="2025-04-30T00:12:03.427255942Z" level=info msg="TearDown network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:03.427998 containerd[1452]: time="2025-04-30T00:12:03.427271664Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:03.427998 containerd[1452]: time="2025-04-30T00:12:03.427378798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:3,}" Apr 30 00:12:03.428264 systemd[1]: run-netns-cni\x2ddf054683\x2d522b\x2dfbaa\x2d0dc4\x2d2a5eb52cfc7b.mount: Deactivated successfully. 
Apr 30 00:12:03.428561 kubelet[2531]: I0430 00:12:03.428540 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31" Apr 30 00:12:03.428701 containerd[1452]: time="2025-04-30T00:12:03.428663325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:3,}" Apr 30 00:12:03.430327 containerd[1452]: time="2025-04-30T00:12:03.430291857Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\"" Apr 30 00:12:03.430481 containerd[1452]: time="2025-04-30T00:12:03.430456438Z" level=info msg="Ensure that sandbox 6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31 in task-service has been cleanup successfully" Apr 30 00:12:03.432505 containerd[1452]: time="2025-04-30T00:12:03.431379398Z" level=info msg="TearDown network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" successfully" Apr 30 00:12:03.432505 containerd[1452]: time="2025-04-30T00:12:03.432498584Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" returns successfully" Apr 30 00:12:03.433278 containerd[1452]: time="2025-04-30T00:12:03.433245481Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\"" Apr 30 00:12:03.433381 containerd[1452]: time="2025-04-30T00:12:03.433368297Z" level=info msg="TearDown network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" successfully" Apr 30 00:12:03.433428 containerd[1452]: time="2025-04-30T00:12:03.433380419Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" returns successfully" Apr 30 00:12:03.433922 containerd[1452]: time="2025-04-30T00:12:03.433875443Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\"" Apr 30 00:12:03.433995 containerd[1452]: time="2025-04-30T00:12:03.433977457Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully" Apr 30 00:12:03.433995 containerd[1452]: time="2025-04-30T00:12:03.433991738Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully" Apr 30 00:12:03.434431 kubelet[2531]: E0430 00:12:03.434220 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:03.436304 containerd[1452]: time="2025-04-30T00:12:03.436271475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:3,}" Apr 30 00:12:03.437264 containerd[1452]: time="2025-04-30T00:12:03.437118825Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\"" Apr 30 00:12:03.437334 kubelet[2531]: I0430 00:12:03.436526 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43" Apr 30 00:12:03.437414 containerd[1452]: time="2025-04-30T00:12:03.437306130Z" level=info msg="Ensure that sandbox e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43 in 
task-service has been cleanup successfully" Apr 30 00:12:03.437931 containerd[1452]: time="2025-04-30T00:12:03.437864122Z" level=info msg="TearDown network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" successfully" Apr 30 00:12:03.437931 containerd[1452]: time="2025-04-30T00:12:03.437917449Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" returns successfully" Apr 30 00:12:03.438288 containerd[1452]: time="2025-04-30T00:12:03.438145479Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\"" Apr 30 00:12:03.438288 containerd[1452]: time="2025-04-30T00:12:03.438229450Z" level=info msg="TearDown network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" successfully" Apr 30 00:12:03.438288 containerd[1452]: time="2025-04-30T00:12:03.438238811Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" returns successfully" Apr 30 00:12:03.438570 containerd[1452]: time="2025-04-30T00:12:03.438543611Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:03.438654 containerd[1452]: time="2025-04-30T00:12:03.438640863Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:03.438690 containerd[1452]: time="2025-04-30T00:12:03.438654225Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully" Apr 30 00:12:03.439384 kubelet[2531]: E0430 00:12:03.438859 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:03.439384 kubelet[2531]: I0430 00:12:03.438979 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9" Apr 30 00:12:03.439488 containerd[1452]: time="2025-04-30T00:12:03.439155930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:3,}" Apr 30 00:12:03.439488 containerd[1452]: time="2025-04-30T00:12:03.439461490Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\"" Apr 30 00:12:03.440056 containerd[1452]: time="2025-04-30T00:12:03.439610790Z" level=info msg="Ensure that sandbox fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9 in task-service has been cleanup successfully" Apr 30 00:12:03.440056 containerd[1452]: time="2025-04-30T00:12:03.439925751Z" level=info msg="TearDown network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" successfully" Apr 30 00:12:03.440056 containerd[1452]: time="2025-04-30T00:12:03.440001560Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" returns successfully" Apr 30 00:12:03.441354 containerd[1452]: time="2025-04-30T00:12:03.441319092Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\"" Apr 30 00:12:03.441431 containerd[1452]: time="2025-04-30T00:12:03.441418265Z" level=info msg="TearDown network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" successfully" 
Apr 30 00:12:03.441458 containerd[1452]: time="2025-04-30T00:12:03.441428346Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" returns successfully" Apr 30 00:12:03.442087 containerd[1452]: time="2025-04-30T00:12:03.442059148Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\"" Apr 30 00:12:03.442165 containerd[1452]: time="2025-04-30T00:12:03.442148240Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully" Apr 30 00:12:03.442165 containerd[1452]: time="2025-04-30T00:12:03.442159881Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully" Apr 30 00:12:03.442698 containerd[1452]: time="2025-04-30T00:12:03.442652745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:3,}" Apr 30 00:12:03.575627 containerd[1452]: time="2025-04-30T00:12:03.575574443Z" level=error msg="Failed to destroy network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.576000 containerd[1452]: time="2025-04-30T00:12:03.575958533Z" level=error msg="encountered an error cleaning up failed sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.576052 containerd[1452]: time="2025-04-30T00:12:03.576020541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.576279 kubelet[2531]: E0430 00:12:03.576245 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.576717 kubelet[2531]: E0430 00:12:03.576493 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:03.576717 kubelet[2531]: E0430 00:12:03.576518 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:03.576717 kubelet[2531]: E0430 00:12:03.576561 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:12:03.581418 containerd[1452]: time="2025-04-30T00:12:03.581248542Z" level=error msg="Failed to destroy network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.581655 containerd[1452]: time="2025-04-30T00:12:03.581624430Z" level=error msg="encountered an error cleaning up failed sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.581762 containerd[1452]: time="2025-04-30T00:12:03.581711042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.582319 kubelet[2531]: E0430 00:12:03.581923 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.582319 kubelet[2531]: E0430 00:12:03.582022 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:03.582319 kubelet[2531]: E0430 00:12:03.582052 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:03.582438 kubelet[2531]: E0430 00:12:03.582092 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fmpt9" podUID="18e58184-40ff-4d1f-8487-cc0971208414" Apr 30 00:12:03.733974 containerd[1452]: time="2025-04-30T00:12:03.733923090Z" level=error msg="Failed to destroy network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.734978 containerd[1452]: time="2025-04-30T00:12:03.734942062Z" level=error msg="encountered an error cleaning up failed sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.735060 containerd[1452]: time="2025-04-30T00:12:03.735020673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.735305 kubelet[2531]: E0430 00:12:03.735256 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.735370 kubelet[2531]: E0430 00:12:03.735327 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:03.735476 kubelet[2531]: E0430 00:12:03.735411 2531 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:03.735506 kubelet[2531]: E0430 00:12:03.735466 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" podUID="ec608e6a-3467-4db3-99e7-9ccb6cf924be" Apr 30 00:12:03.749490 containerd[1452]: time="2025-04-30T00:12:03.749136310Z" level=error msg="Failed to destroy network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.749812 containerd[1452]: time="2025-04-30T00:12:03.749779753Z" level=error msg="encountered an error cleaning up failed sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.749899 containerd[1452]: time="2025-04-30T00:12:03.749844722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.750458 kubelet[2531]: E0430 00:12:03.750099 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.750458 kubelet[2531]: E0430 00:12:03.750157 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:03.750458 kubelet[2531]: E0430 00:12:03.750182 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:03.751819 kubelet[2531]: E0430 00:12:03.750219 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" podUID="045c28bd-5788-44ed-ad00-a7c01b791cf2" Apr 30 00:12:03.751877 containerd[1452]: time="2025-04-30T00:12:03.751390003Z" level=error msg="Failed to destroy network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.751877 containerd[1452]: time="2025-04-30T00:12:03.751770132Z" level=error msg="encountered an error cleaning up failed sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.751877 containerd[1452]: time="2025-04-30T00:12:03.751821219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.752024 kubelet[2531]: E0430 00:12:03.751976 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.752102 kubelet[2531]: E0430 00:12:03.752080 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:03.752139 kubelet[2531]: E0430 00:12:03.752104 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:03.752166 kubelet[2531]: E0430 00:12:03.752147 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sczzd" podUID="f10de97b-7bb8-4314-ad06-307db947bbd3" Apr 30 00:12:03.756412 containerd[1452]: time="2025-04-30T00:12:03.756368451Z" level=error msg="Failed to destroy network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.757003 containerd[1452]: time="2025-04-30T00:12:03.756878637Z" level=error msg="encountered an error cleaning up failed sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.757159 containerd[1452]: time="2025-04-30T00:12:03.756951807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.757348 kubelet[2531]: E0430 00:12:03.757313 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:03.757754 kubelet[2531]: E0430 00:12:03.757589 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:03.757754 kubelet[2531]: E0430 00:12:03.757621 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:03.757754 kubelet[2531]: E0430 00:12:03.757702 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" podUID="d7f443d6-1623-481d-ac5b-c37dd6fd2c49" Apr 30 00:12:03.827521 systemd[1]: run-netns-cni\x2d70b82498\x2d20cb\x2d3830\x2dccf4\x2d274032fc1e39.mount: Deactivated successfully. Apr 30 00:12:03.827609 systemd[1]: run-netns-cni\x2de100d3ee\x2dad90\x2d30fe\x2dd2ae\x2d2ae4fe1de7d9.mount: Deactivated successfully. Apr 30 00:12:03.827655 systemd[1]: run-netns-cni\x2d416dcae4\x2d9190\x2d548c\x2d5bf8\x2d043cbeeec6fa.mount: Deactivated successfully. Apr 30 00:12:04.442920 kubelet[2531]: I0430 00:12:04.442818 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170" Apr 30 00:12:04.444602 containerd[1452]: time="2025-04-30T00:12:04.444191473Z" level=info msg="StopPodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\"" Apr 30 00:12:04.444602 containerd[1452]: time="2025-04-30T00:12:04.444418061Z" level=info msg="Ensure that sandbox 4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170 in task-service has been cleanup successfully" Apr 30 00:12:04.446746 containerd[1452]: time="2025-04-30T00:12:04.446579885Z" level=info msg="TearDown network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" successfully" Apr 30 00:12:04.446746 containerd[1452]: time="2025-04-30T00:12:04.446611808Z" level=info msg="StopPodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" returns successfully" Apr 30 00:12:04.447927 systemd[1]: run-netns-cni\x2de6ce639b\x2da351\x2d619b\x2d8f5e\x2d7c814a3e7931.mount: Deactivated successfully. 
Apr 30 00:12:04.451872 containerd[1452]: time="2025-04-30T00:12:04.450954938Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" Apr 30 00:12:04.451872 containerd[1452]: time="2025-04-30T00:12:04.451101596Z" level=info msg="TearDown network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" successfully" Apr 30 00:12:04.451872 containerd[1452]: time="2025-04-30T00:12:04.451114878Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" returns successfully" Apr 30 00:12:04.452182 containerd[1452]: time="2025-04-30T00:12:04.452133002Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 00:12:04.452293 containerd[1452]: time="2025-04-30T00:12:04.452274859Z" level=info msg="TearDown network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" successfully" Apr 30 00:12:04.452293 containerd[1452]: time="2025-04-30T00:12:04.452289981Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" returns successfully" Apr 30 00:12:04.454193 containerd[1452]: time="2025-04-30T00:12:04.454151488Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:04.454301 containerd[1452]: time="2025-04-30T00:12:04.454285025Z" level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:04.454327 containerd[1452]: time="2025-04-30T00:12:04.454297866Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:04.455276 kubelet[2531]: I0430 00:12:04.454697 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b" Apr 30 00:12:04.455365 containerd[1452]: time="2025-04-30T00:12:04.454837772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:4,}" Apr 30 00:12:04.457805 containerd[1452]: time="2025-04-30T00:12:04.457769090Z" level=info msg="StopPodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\"" Apr 30 00:12:04.457992 containerd[1452]: time="2025-04-30T00:12:04.457973155Z" level=info msg="Ensure that sandbox f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b in task-service has been cleanup successfully" Apr 30 00:12:04.462734 systemd[1]: run-netns-cni\x2d82b93fd6\x2d0630\x2d2451\x2db76a\x2d1518d6a6bc39.mount: Deactivated successfully. 
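Each retry above stops and tears down every sandbox previously created for the pod and then calls RunPodSandbox again with the attempt counter bumped (Attempt:3 earlier in the log, Attempt:4 here). A short Python sketch, assuming journal text in exactly this format as input, that extracts the pod name, UID, namespace and attempt number from those messages to show how far each pod has been retried:

    import re
    from collections import defaultdict

    # Matches the PodSandboxMetadata struct printed in the RunPodSandbox messages above.
    META = re.compile(
        r"RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),"
        r"Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),Attempt:(?P<attempt>\d+),\}"
    )

    def max_attempt_per_pod(journal_text: str) -> dict:
        """Highest RunPodSandbox attempt seen for each namespace/pod pair."""
        attempts = defaultdict(int)
        for m in META.finditer(journal_text):
            key = f"{m.group('ns')}/{m.group('name')}"
            attempts[key] = max(attempts[key], int(m.group("attempt")))
        return dict(attempts)

    sample = ('RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,'
              'Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:4,}')
    print(max_attempt_per_pod(sample))   # {'calico-system/csi-node-driver-h9r9d': 4}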
Apr 30 00:12:04.463637 containerd[1452]: time="2025-04-30T00:12:04.463587800Z" level=info msg="TearDown network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" successfully" Apr 30 00:12:04.463637 containerd[1452]: time="2025-04-30T00:12:04.463628645Z" level=info msg="StopPodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" returns successfully" Apr 30 00:12:04.464285 containerd[1452]: time="2025-04-30T00:12:04.464248280Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" Apr 30 00:12:04.464710 containerd[1452]: time="2025-04-30T00:12:04.464583121Z" level=info msg="TearDown network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" successfully" Apr 30 00:12:04.464710 containerd[1452]: time="2025-04-30T00:12:04.464602003Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" returns successfully" Apr 30 00:12:04.465117 containerd[1452]: time="2025-04-30T00:12:04.465069740Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:04.465209 containerd[1452]: time="2025-04-30T00:12:04.465192475Z" level=info msg="TearDown network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" successfully" Apr 30 00:12:04.465238 containerd[1452]: time="2025-04-30T00:12:04.465207877Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" returns successfully" Apr 30 00:12:04.466498 containerd[1452]: time="2025-04-30T00:12:04.465520995Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:04.466498 containerd[1452]: time="2025-04-30T00:12:04.465632649Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:04.466498 containerd[1452]: time="2025-04-30T00:12:04.465643530Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:04.466498 containerd[1452]: time="2025-04-30T00:12:04.466193317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:4,}" Apr 30 00:12:04.566821 kubelet[2531]: I0430 00:12:04.565698 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9" Apr 30 00:12:04.569232 containerd[1452]: time="2025-04-30T00:12:04.567903286Z" level=info msg="StopPodSandbox for \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\"" Apr 30 00:12:04.569232 containerd[1452]: time="2025-04-30T00:12:04.568073747Z" level=info msg="Ensure that sandbox 1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9 in task-service has been cleanup successfully" Apr 30 00:12:04.578698 kubelet[2531]: I0430 00:12:04.578504 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f" Apr 30 00:12:04.584159 containerd[1452]: time="2025-04-30T00:12:04.584107423Z" level=info msg="StopPodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\"" Apr 30 00:12:04.587777 containerd[1452]: time="2025-04-30T00:12:04.585632769Z" 
level=info msg="Ensure that sandbox 66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f in task-service has been cleanup successfully" Apr 30 00:12:04.587858 systemd[1]: run-netns-cni\x2daa03210c\x2df96e\x2d6113\x2d870c\x2d221a2050c71e.mount: Deactivated successfully. Apr 30 00:12:04.588858 containerd[1452]: time="2025-04-30T00:12:04.588520762Z" level=info msg="TearDown network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" successfully" Apr 30 00:12:04.588858 containerd[1452]: time="2025-04-30T00:12:04.588549525Z" level=info msg="StopPodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" returns successfully" Apr 30 00:12:04.591633 containerd[1452]: time="2025-04-30T00:12:04.591585015Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\"" Apr 30 00:12:04.592039 containerd[1452]: time="2025-04-30T00:12:04.592011428Z" level=info msg="TearDown network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" successfully" Apr 30 00:12:04.592039 containerd[1452]: time="2025-04-30T00:12:04.592038631Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" returns successfully" Apr 30 00:12:04.595441 containerd[1452]: time="2025-04-30T00:12:04.595380359Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\"" Apr 30 00:12:04.595557 containerd[1452]: time="2025-04-30T00:12:04.595537898Z" level=info msg="TearDown network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" successfully" Apr 30 00:12:04.595584 containerd[1452]: time="2025-04-30T00:12:04.595555140Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" returns successfully" Apr 30 00:12:04.599164 kubelet[2531]: I0430 00:12:04.599096 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d" Apr 30 00:12:04.600179 containerd[1452]: time="2025-04-30T00:12:04.599945075Z" level=info msg="StopPodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\"" Apr 30 00:12:04.600179 containerd[1452]: time="2025-04-30T00:12:04.600121977Z" level=info msg="Ensure that sandbox e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d in task-service has been cleanup successfully" Apr 30 00:12:04.602610 systemd[1]: run-netns-cni\x2d0b23dbfc\x2d0386\x2d5eb5\x2deec6\x2d8b8afd92be9f.mount: Deactivated successfully. 
Apr 30 00:12:04.603323 containerd[1452]: time="2025-04-30T00:12:04.603277962Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:04.603413 containerd[1452]: time="2025-04-30T00:12:04.603392696Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:04.603413 containerd[1452]: time="2025-04-30T00:12:04.603409018Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully" Apr 30 00:12:04.605802 containerd[1452]: time="2025-04-30T00:12:04.605767066Z" level=info msg="TearDown network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" successfully" Apr 30 00:12:04.605802 containerd[1452]: time="2025-04-30T00:12:04.605798190Z" level=info msg="StopPodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" returns successfully" Apr 30 00:12:04.606037 kubelet[2531]: E0430 00:12:04.605998 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:04.607081 containerd[1452]: time="2025-04-30T00:12:04.606950810Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" Apr 30 00:12:04.607166 containerd[1452]: time="2025-04-30T00:12:04.607112430Z" level=info msg="TearDown network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" successfully" Apr 30 00:12:04.607166 containerd[1452]: time="2025-04-30T00:12:04.607128352Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" returns successfully" Apr 30 00:12:04.607637 containerd[1452]: time="2025-04-30T00:12:04.607477034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:4,}" Apr 30 00:12:04.609298 containerd[1452]: time="2025-04-30T00:12:04.609265533Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:04.609502 containerd[1452]: time="2025-04-30T00:12:04.609472958Z" level=info msg="TearDown network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" successfully" Apr 30 00:12:04.609502 containerd[1452]: time="2025-04-30T00:12:04.609494721Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" returns successfully" Apr 30 00:12:04.609895 containerd[1452]: time="2025-04-30T00:12:04.609861725Z" level=info msg="TearDown network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" successfully" Apr 30 00:12:04.609895 containerd[1452]: time="2025-04-30T00:12:04.609888089Z" level=info msg="StopPodSandbox for \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" returns successfully" Apr 30 00:12:04.612901 containerd[1452]: time="2025-04-30T00:12:04.612344748Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\"" Apr 30 00:12:04.612901 containerd[1452]: time="2025-04-30T00:12:04.612463563Z" level=info msg="TearDown network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" successfully" Apr 30 00:12:04.612901 containerd[1452]: 
time="2025-04-30T00:12:04.612474244Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" returns successfully" Apr 30 00:12:04.612901 containerd[1452]: time="2025-04-30T00:12:04.612542332Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:04.612901 containerd[1452]: time="2025-04-30T00:12:04.612606180Z" level=info msg="TearDown network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:04.612901 containerd[1452]: time="2025-04-30T00:12:04.612615301Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:04.617708 containerd[1452]: time="2025-04-30T00:12:04.615639710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:4,}" Apr 30 00:12:04.617708 containerd[1452]: time="2025-04-30T00:12:04.615695197Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\"" Apr 30 00:12:04.617708 containerd[1452]: time="2025-04-30T00:12:04.615781327Z" level=info msg="TearDown network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" successfully" Apr 30 00:12:04.617708 containerd[1452]: time="2025-04-30T00:12:04.615790569Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" returns successfully" Apr 30 00:12:04.618756 containerd[1452]: time="2025-04-30T00:12:04.618310156Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\"" Apr 30 00:12:04.618756 containerd[1452]: time="2025-04-30T00:12:04.618476936Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully" Apr 30 00:12:04.618756 containerd[1452]: time="2025-04-30T00:12:04.618490658Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully" Apr 30 00:12:04.619445 kubelet[2531]: I0430 00:12:04.619053 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a" Apr 30 00:12:04.619874 containerd[1452]: time="2025-04-30T00:12:04.619835462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:4,}" Apr 30 00:12:04.619988 containerd[1452]: time="2025-04-30T00:12:04.619958117Z" level=info msg="StopPodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\"" Apr 30 00:12:04.621233 containerd[1452]: time="2025-04-30T00:12:04.621183547Z" level=info msg="Ensure that sandbox 294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a in task-service has been cleanup successfully" Apr 30 00:12:04.621597 containerd[1452]: time="2025-04-30T00:12:04.621513707Z" level=info msg="TearDown network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" successfully" Apr 30 00:12:04.621597 containerd[1452]: time="2025-04-30T00:12:04.621535389Z" level=info msg="StopPodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" returns successfully" Apr 30 00:12:04.623314 containerd[1452]: 
time="2025-04-30T00:12:04.623134545Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\"" Apr 30 00:12:04.623314 containerd[1452]: time="2025-04-30T00:12:04.623239197Z" level=info msg="TearDown network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" successfully" Apr 30 00:12:04.623314 containerd[1452]: time="2025-04-30T00:12:04.623250679Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" returns successfully" Apr 30 00:12:04.624765 containerd[1452]: time="2025-04-30T00:12:04.623578839Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\"" Apr 30 00:12:04.624765 containerd[1452]: time="2025-04-30T00:12:04.623656648Z" level=info msg="TearDown network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" successfully" Apr 30 00:12:04.624765 containerd[1452]: time="2025-04-30T00:12:04.623667330Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" returns successfully" Apr 30 00:12:04.624765 containerd[1452]: time="2025-04-30T00:12:04.623977927Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\"" Apr 30 00:12:04.624765 containerd[1452]: time="2025-04-30T00:12:04.624061738Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully" Apr 30 00:12:04.624765 containerd[1452]: time="2025-04-30T00:12:04.624071419Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully" Apr 30 00:12:04.625913 kubelet[2531]: E0430 00:12:04.624746 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:04.627556 containerd[1452]: time="2025-04-30T00:12:04.627408186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:4,}" Apr 30 00:12:04.739993 containerd[1452]: time="2025-04-30T00:12:04.739923553Z" level=error msg="Failed to destroy network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.747084 containerd[1452]: time="2025-04-30T00:12:04.747021899Z" level=error msg="encountered an error cleaning up failed sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.747227 containerd[1452]: time="2025-04-30T00:12:04.747117791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 30 00:12:04.747416 kubelet[2531]: E0430 00:12:04.747364 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.747470 kubelet[2531]: E0430 00:12:04.747435 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:04.747470 kubelet[2531]: E0430 00:12:04.747455 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9r9d" Apr 30 00:12:04.747527 kubelet[2531]: E0430 00:12:04.747496 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9r9d_calico-system(c26bbdc1-7849-41b7-afe2-12b01cd7b775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9r9d" podUID="c26bbdc1-7849-41b7-afe2-12b01cd7b775" Apr 30 00:12:04.758988 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:39658.service - OpenSSH per-connection server daemon (10.0.0.1:39658). 
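The "plugin type=\"calico\" failed (add)" and "(delete)" phrases reflect the CNI calling convention: the runtime execs the plugin binary with CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS and CNI_IFNAME in the environment and the network configuration on stdin, and the plugin's error text is roughly what containerd surfaces in the log lines above. A rough Python illustration of that convention; the binary path, netns path and the minimal netconf are placeholders, not values taken from this system:

    import json
    import subprocess

    def cni_add(container_id: str, netns: str, ifname: str = "eth0",
                plugin: str = "/opt/cni/bin/calico") -> str:
        """Invoke a CNI plugin the way a container runtime does (illustrative only)."""
        env = {
            "CNI_COMMAND": "ADD",            # the teardown path uses "DEL"
            "CNI_CONTAINERID": container_id,
            "CNI_NETNS": netns,
            "CNI_IFNAME": ifname,
            "CNI_PATH": "/opt/cni/bin",
        }
        netconf = {                          # minimal stand-in config, not a real Calico netconf
            "cniVersion": "0.4.0",
            "name": "k8s-pod-network",
            "type": "calico",
        }
        proc = subprocess.run([plugin], input=json.dumps(netconf),
                              env=env, capture_output=True, text=True)
        if proc.returncode != 0:
            # When /var/lib/calico/nodename is missing, the error string the plugin
            # prints here is what ends up quoted in the containerd entries above.
            raise RuntimeError(proc.stdout or proc.stderr)
        return proc.stdout                   # JSON result describing interface and IPs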
Apr 30 00:12:04.765664 containerd[1452]: time="2025-04-30T00:12:04.765616408Z" level=error msg="Failed to destroy network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.766021 containerd[1452]: time="2025-04-30T00:12:04.765938767Z" level=error msg="encountered an error cleaning up failed sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.766056 containerd[1452]: time="2025-04-30T00:12:04.766033459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.766854 kubelet[2531]: E0430 00:12:04.766277 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.766854 kubelet[2531]: E0430 00:12:04.766351 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:04.766854 kubelet[2531]: E0430 00:12:04.766373 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sczzd" Apr 30 00:12:04.766991 kubelet[2531]: E0430 00:12:04.766425 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sczzd_kube-system(f10de97b-7bb8-4314-ad06-307db947bbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sczzd" 
podUID="f10de97b-7bb8-4314-ad06-307db947bbd3" Apr 30 00:12:04.790985 containerd[1452]: time="2025-04-30T00:12:04.790914014Z" level=error msg="Failed to destroy network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.791318 containerd[1452]: time="2025-04-30T00:12:04.791280099Z" level=error msg="encountered an error cleaning up failed sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.791360 containerd[1452]: time="2025-04-30T00:12:04.791341946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.791656 kubelet[2531]: E0430 00:12:04.791613 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.791742 kubelet[2531]: E0430 00:12:04.791674 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:04.791784 kubelet[2531]: E0430 00:12:04.791750 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" Apr 30 00:12:04.791837 kubelet[2531]: E0430 00:12:04.791812 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-dn8cn_calico-apiserver(d7f443d6-1623-481d-ac5b-c37dd6fd2c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" podUID="d7f443d6-1623-481d-ac5b-c37dd6fd2c49" Apr 30 00:12:04.805108 containerd[1452]: time="2025-04-30T00:12:04.805049499Z" level=error msg="Failed to destroy network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.806120 containerd[1452]: time="2025-04-30T00:12:04.805727301Z" level=error msg="encountered an error cleaning up failed sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.806120 containerd[1452]: time="2025-04-30T00:12:04.805792429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.806242 kubelet[2531]: E0430 00:12:04.806025 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.806242 kubelet[2531]: E0430 00:12:04.806100 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:04.806242 kubelet[2531]: E0430 00:12:04.806122 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fmpt9" Apr 30 00:12:04.806329 kubelet[2531]: E0430 00:12:04.806159 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fmpt9_kube-system(18e58184-40ff-4d1f-8487-cc0971208414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fmpt9" podUID="18e58184-40ff-4d1f-8487-cc0971208414" Apr 30 00:12:04.826718 containerd[1452]: time="2025-04-30T00:12:04.826654094Z" level=error msg="Failed to destroy network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.827662 containerd[1452]: time="2025-04-30T00:12:04.826810793Z" level=error msg="Failed to destroy network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.827662 containerd[1452]: time="2025-04-30T00:12:04.827559525Z" level=error msg="encountered an error cleaning up failed sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.827662 containerd[1452]: time="2025-04-30T00:12:04.827616052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.828356 kubelet[2531]: E0430 00:12:04.827860 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.828356 kubelet[2531]: E0430 00:12:04.827928 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:04.828356 kubelet[2531]: E0430 00:12:04.827949 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" Apr 30 00:12:04.828459 kubelet[2531]: E0430 00:12:04.827998 2531 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd89fb76-spgc5_calico-apiserver(045c28bd-5788-44ed-ad00-a7c01b791cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" podUID="045c28bd-5788-44ed-ad00-a7c01b791cf2" Apr 30 00:12:04.830591 containerd[1452]: time="2025-04-30T00:12:04.829850684Z" level=error msg="encountered an error cleaning up failed sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.830591 containerd[1452]: time="2025-04-30T00:12:04.830212288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.831454 kubelet[2531]: E0430 00:12:04.830443 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:12:04.831454 kubelet[2531]: E0430 00:12:04.830492 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:04.831454 kubelet[2531]: E0430 00:12:04.830514 2531 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" Apr 30 00:12:04.830964 systemd[1]: run-netns-cni\x2dfb3ee21a\x2da91b\x2de4e7\x2d2ca7\x2d7c386081af7f.mount: Deactivated successfully. 
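Every failed RunPodSandbox and teardown attempt above reduces to the same condition: the Calico CNI plugin stats /var/lib/calico/nodename and refuses to add or delete pod networking until calico/node has written that file, which is exactly what the error text suggests checking. A minimal probe for the same condition, assuming the default Calico data directory shown in the log, is sketched below.

package main

import (
	"fmt"
	"os"
)

// checkCalicoNodeReady reports whether calico/node has published the nodename
// file that the CNI plugin stats before handling ADD/DEL requests.
func checkCalicoNodeReady(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		return fmt.Errorf("calico/node not ready: %w", err)
	}
	if info.IsDir() {
		return fmt.Errorf("%s exists but is a directory", path)
	}
	return nil
}

func main() {
	// Path taken from the log messages; adjust if the data dir is remapped.
	if err := checkCalicoNodeReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("calico/node has published its nodename; CNI ADD/DEL should succeed")
}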
Apr 30 00:12:04.831611 kubelet[2531]: E0430 00:12:04.830559 2531 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd4f6bb7c-nl4l2_calico-system(ec608e6a-3467-4db3-99e7-9ccb6cf924be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" podUID="ec608e6a-3467-4db3-99e7-9ccb6cf924be" Apr 30 00:12:04.831063 systemd[1]: run-netns-cni\x2d928ff0ee\x2dc20e\x2d9dff\x2d519c\x2db294509aba20.mount: Deactivated successfully. Apr 30 00:12:04.834518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d-shm.mount: Deactivated successfully. Apr 30 00:12:04.834612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62-shm.mount: Deactivated successfully. Apr 30 00:12:04.836932 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 39658 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:04.838720 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:04.843560 systemd-logind[1431]: New session 8 of user core. Apr 30 00:12:04.849877 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:12:04.894004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633319098.mount: Deactivated successfully. 
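The kubelet errors above surface through the CRI RunPodSandbox RPC against containerd. A standalone probe that issues the same call can reproduce the error outside the kubelet; the sketch below assumes containerd's default socket, reuses the pod metadata from the log entries above, and sends only a stripped-down PodSandboxConfig (a production call would carry the full config the kubelet builds).

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "coredns-6f6b679f8f-fmpt9",
				Namespace: "kube-system",
				Uid:       "18e58184-40ff-4d1f-8487-cc0971208414",
				Attempt:   5,
			},
		},
	})
	if err != nil {
		// While calico/node is not ready this returns the same
		// "stat /var/lib/calico/nodename" error seen in the log.
		fmt.Println("RunPodSandbox failed:", err)
		return
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}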
Apr 30 00:12:04.920177 containerd[1452]: time="2025-04-30T00:12:04.920126698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:04.921181 containerd[1452]: time="2025-04-30T00:12:04.920972481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:12:04.922727 containerd[1452]: time="2025-04-30T00:12:04.922543033Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:04.925900 containerd[1452]: time="2025-04-30T00:12:04.924516594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:04.925900 containerd[1452]: time="2025-04-30T00:12:04.925751784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.56287054s" Apr 30 00:12:04.925900 containerd[1452]: time="2025-04-30T00:12:04.925781948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:12:04.934912 containerd[1452]: time="2025-04-30T00:12:04.934864296Z" level=info msg="CreateContainer within sandbox \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:12:04.955318 containerd[1452]: time="2025-04-30T00:12:04.955269546Z" level=info msg="CreateContainer within sandbox \"73175c9b415cfedc6f25075bacf8d011fd2192e08c11365de6e15d7ffc4ea222\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"63f05e6be0c49cccc12ea816adf551eab2462299a7190a5e34bcd11898f366ac\"" Apr 30 00:12:04.955791 containerd[1452]: time="2025-04-30T00:12:04.955766926Z" level=info msg="StartContainer for \"63f05e6be0c49cccc12ea816adf551eab2462299a7190a5e34bcd11898f366ac\"" Apr 30 00:12:04.994043 sshd[4393]: Connection closed by 10.0.0.1 port 39658 Apr 30 00:12:04.994372 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:04.999156 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:39658.service: Deactivated successfully. Apr 30 00:12:05.002919 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:12:05.004536 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:12:05.006369 systemd-logind[1431]: Removed session 8. Apr 30 00:12:05.020926 systemd[1]: Started cri-containerd-63f05e6be0c49cccc12ea816adf551eab2462299a7190a5e34bcd11898f366ac.scope - libcontainer container 63f05e6be0c49cccc12ea816adf551eab2462299a7190a5e34bcd11898f366ac. Apr 30 00:12:05.059813 containerd[1452]: time="2025-04-30T00:12:05.059762971Z" level=info msg="StartContainer for \"63f05e6be0c49cccc12ea816adf551eab2462299a7190a5e34bcd11898f366ac\" returns successfully" Apr 30 00:12:05.460359 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:12:05.460746 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Apr 30 00:12:05.624426 kubelet[2531]: I0430 00:12:05.624374 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba" Apr 30 00:12:05.626109 containerd[1452]: time="2025-04-30T00:12:05.625193204Z" level=info msg="StopPodSandbox for \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\"" Apr 30 00:12:05.626109 containerd[1452]: time="2025-04-30T00:12:05.625460834Z" level=info msg="Ensure that sandbox a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba in task-service has been cleanup successfully" Apr 30 00:12:05.626109 containerd[1452]: time="2025-04-30T00:12:05.625724745Z" level=info msg="TearDown network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\" successfully" Apr 30 00:12:05.626109 containerd[1452]: time="2025-04-30T00:12:05.625743107Z" level=info msg="StopPodSandbox for \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\" returns successfully" Apr 30 00:12:05.627267 containerd[1452]: time="2025-04-30T00:12:05.626205039Z" level=info msg="StopPodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\"" Apr 30 00:12:05.628434 containerd[1452]: time="2025-04-30T00:12:05.628190787Z" level=info msg="TearDown network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" successfully" Apr 30 00:12:05.628434 containerd[1452]: time="2025-04-30T00:12:05.628315401Z" level=info msg="StopPodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" returns successfully" Apr 30 00:12:05.629139 containerd[1452]: time="2025-04-30T00:12:05.629111092Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\"" Apr 30 00:12:05.629243 containerd[1452]: time="2025-04-30T00:12:05.629208703Z" level=info msg="TearDown network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" successfully" Apr 30 00:12:05.629243 containerd[1452]: time="2025-04-30T00:12:05.629222425Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" returns successfully" Apr 30 00:12:05.629602 containerd[1452]: time="2025-04-30T00:12:05.629571745Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\"" Apr 30 00:12:05.629918 containerd[1452]: time="2025-04-30T00:12:05.629877820Z" level=info msg="TearDown network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" successfully" Apr 30 00:12:05.629918 containerd[1452]: time="2025-04-30T00:12:05.629904783Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" returns successfully" Apr 30 00:12:05.631516 containerd[1452]: time="2025-04-30T00:12:05.631345547Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:05.632933 containerd[1452]: time="2025-04-30T00:12:05.632814715Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:05.632933 containerd[1452]: time="2025-04-30T00:12:05.632844239Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully" Apr 30 00:12:05.633222 kubelet[2531]: E0430 00:12:05.633199 2531 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:05.633753 containerd[1452]: time="2025-04-30T00:12:05.633546159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:5,}" Apr 30 00:12:05.659750 kubelet[2531]: I0430 00:12:05.659302 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62" Apr 30 00:12:05.661176 containerd[1452]: time="2025-04-30T00:12:05.660312781Z" level=info msg="StopPodSandbox for \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\"" Apr 30 00:12:05.661176 containerd[1452]: time="2025-04-30T00:12:05.660527085Z" level=info msg="Ensure that sandbox 458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62 in task-service has been cleanup successfully" Apr 30 00:12:05.661958 containerd[1452]: time="2025-04-30T00:12:05.661926685Z" level=info msg="TearDown network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\" successfully" Apr 30 00:12:05.662729 containerd[1452]: time="2025-04-30T00:12:05.662227040Z" level=info msg="StopPodSandbox for \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\" returns successfully" Apr 30 00:12:05.665056 containerd[1452]: time="2025-04-30T00:12:05.664525142Z" level=info msg="StopPodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\"" Apr 30 00:12:05.665573 containerd[1452]: time="2025-04-30T00:12:05.665522417Z" level=info msg="TearDown network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" successfully" Apr 30 00:12:05.665792 containerd[1452]: time="2025-04-30T00:12:05.665543099Z" level=info msg="StopPodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" returns successfully" Apr 30 00:12:05.666765 containerd[1452]: time="2025-04-30T00:12:05.666738116Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" Apr 30 00:12:05.667332 containerd[1452]: time="2025-04-30T00:12:05.666936498Z" level=info msg="TearDown network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" successfully" Apr 30 00:12:05.667332 containerd[1452]: time="2025-04-30T00:12:05.666962741Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" returns successfully" Apr 30 00:12:05.688378 containerd[1452]: time="2025-04-30T00:12:05.688314743Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:05.688507 containerd[1452]: time="2025-04-30T00:12:05.688459520Z" level=info msg="TearDown network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" successfully" Apr 30 00:12:05.688507 containerd[1452]: time="2025-04-30T00:12:05.688474002Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" returns successfully" Apr 30 00:12:05.690561 containerd[1452]: time="2025-04-30T00:12:05.690524196Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:05.690652 containerd[1452]: time="2025-04-30T00:12:05.690629608Z" level=info msg="TearDown network for sandbox 
\"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:05.690652 containerd[1452]: time="2025-04-30T00:12:05.690646130Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:05.690961 kubelet[2531]: I0430 00:12:05.690869 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2" Apr 30 00:12:05.693814 containerd[1452]: time="2025-04-30T00:12:05.693765087Z" level=info msg="StopPodSandbox for \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\"" Apr 30 00:12:05.694098 containerd[1452]: time="2025-04-30T00:12:05.694038678Z" level=info msg="Ensure that sandbox 602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2 in task-service has been cleanup successfully" Apr 30 00:12:05.694782 containerd[1452]: time="2025-04-30T00:12:05.694368756Z" level=info msg="TearDown network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\" successfully" Apr 30 00:12:05.694782 containerd[1452]: time="2025-04-30T00:12:05.694392119Z" level=info msg="StopPodSandbox for \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\" returns successfully" Apr 30 00:12:05.695319 containerd[1452]: time="2025-04-30T00:12:05.695277300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:5,}" Apr 30 00:12:05.697136 containerd[1452]: time="2025-04-30T00:12:05.697074545Z" level=info msg="StopPodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\"" Apr 30 00:12:05.697309 containerd[1452]: time="2025-04-30T00:12:05.697198160Z" level=info msg="TearDown network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" successfully" Apr 30 00:12:05.697309 containerd[1452]: time="2025-04-30T00:12:05.697217762Z" level=info msg="StopPodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" returns successfully" Apr 30 00:12:05.700437 containerd[1452]: time="2025-04-30T00:12:05.700399246Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\"" Apr 30 00:12:05.701304 containerd[1452]: time="2025-04-30T00:12:05.701168334Z" level=info msg="TearDown network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" successfully" Apr 30 00:12:05.701304 containerd[1452]: time="2025-04-30T00:12:05.701289547Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" returns successfully" Apr 30 00:12:05.703839 containerd[1452]: time="2025-04-30T00:12:05.702702989Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\"" Apr 30 00:12:05.703839 containerd[1452]: time="2025-04-30T00:12:05.703425272Z" level=info msg="TearDown network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" successfully" Apr 30 00:12:05.703839 containerd[1452]: time="2025-04-30T00:12:05.703443514Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" returns successfully" Apr 30 00:12:05.708245 containerd[1452]: time="2025-04-30T00:12:05.704079067Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\"" 
Apr 30 00:12:05.708245 containerd[1452]: time="2025-04-30T00:12:05.704439268Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully" Apr 30 00:12:05.708245 containerd[1452]: time="2025-04-30T00:12:05.704459870Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully" Apr 30 00:12:05.708245 containerd[1452]: time="2025-04-30T00:12:05.704996251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:5,}" Apr 30 00:12:05.708577 kubelet[2531]: E0430 00:12:05.704189 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:05.708577 kubelet[2531]: E0430 00:12:05.704648 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:05.710494 kubelet[2531]: I0430 00:12:05.710423 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f" Apr 30 00:12:05.711096 containerd[1452]: time="2025-04-30T00:12:05.710984656Z" level=info msg="StopPodSandbox for \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\"" Apr 30 00:12:05.711335 containerd[1452]: time="2025-04-30T00:12:05.711190920Z" level=info msg="Ensure that sandbox 1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f in task-service has been cleanup successfully" Apr 30 00:12:05.711691 containerd[1452]: time="2025-04-30T00:12:05.711643092Z" level=info msg="TearDown network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\" successfully" Apr 30 00:12:05.711753 containerd[1452]: time="2025-04-30T00:12:05.711675575Z" level=info msg="StopPodSandbox for \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\" returns successfully" Apr 30 00:12:05.712785 containerd[1452]: time="2025-04-30T00:12:05.712127747Z" level=info msg="StopPodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\"" Apr 30 00:12:05.713076 containerd[1452]: time="2025-04-30T00:12:05.712981325Z" level=info msg="TearDown network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" successfully" Apr 30 00:12:05.713076 containerd[1452]: time="2025-04-30T00:12:05.713001327Z" level=info msg="StopPodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" returns successfully" Apr 30 00:12:05.715598 containerd[1452]: time="2025-04-30T00:12:05.715563300Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" Apr 30 00:12:05.716415 containerd[1452]: time="2025-04-30T00:12:05.716299784Z" level=info msg="TearDown network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" successfully" Apr 30 00:12:05.716415 containerd[1452]: time="2025-04-30T00:12:05.716323707Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" returns successfully" Apr 30 00:12:05.718271 containerd[1452]: time="2025-04-30T00:12:05.717803916Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 
00:12:05.718271 containerd[1452]: time="2025-04-30T00:12:05.717932771Z" level=info msg="TearDown network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" successfully" Apr 30 00:12:05.718271 containerd[1452]: time="2025-04-30T00:12:05.717944172Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" returns successfully" Apr 30 00:12:05.726904 containerd[1452]: time="2025-04-30T00:12:05.726866113Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:05.727208 kubelet[2531]: I0430 00:12:05.727098 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l625z" podStartSLOduration=1.917730055 podStartE2EDuration="15.727080097s" podCreationTimestamp="2025-04-30 00:11:50 +0000 UTC" firstStartedPulling="2025-04-30 00:11:51.117134232 +0000 UTC m=+15.925538509" lastFinishedPulling="2025-04-30 00:12:04.926484274 +0000 UTC m=+29.734888551" observedRunningTime="2025-04-30 00:12:05.725723622 +0000 UTC m=+30.534127859" watchObservedRunningTime="2025-04-30 00:12:05.727080097 +0000 UTC m=+30.535484374" Apr 30 00:12:05.727830 containerd[1452]: time="2025-04-30T00:12:05.727805860Z" level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:05.728113 containerd[1452]: time="2025-04-30T00:12:05.728040887Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:05.730783 containerd[1452]: time="2025-04-30T00:12:05.730719594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:5,}" Apr 30 00:12:05.732642 kubelet[2531]: I0430 00:12:05.732470 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1" Apr 30 00:12:05.732987 containerd[1452]: time="2025-04-30T00:12:05.732955529Z" level=info msg="StopPodSandbox for \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\"" Apr 30 00:12:05.733524 containerd[1452]: time="2025-04-30T00:12:05.733117508Z" level=info msg="Ensure that sandbox 1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1 in task-service has been cleanup successfully" Apr 30 00:12:05.733524 containerd[1452]: time="2025-04-30T00:12:05.733550677Z" level=info msg="TearDown network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\" successfully" Apr 30 00:12:05.733524 containerd[1452]: time="2025-04-30T00:12:05.733570160Z" level=info msg="StopPodSandbox for \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\" returns successfully" Apr 30 00:12:05.735013 containerd[1452]: time="2025-04-30T00:12:05.734985842Z" level=info msg="StopPodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\"" Apr 30 00:12:05.735491 containerd[1452]: time="2025-04-30T00:12:05.735426372Z" level=info msg="TearDown network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" successfully" Apr 30 00:12:05.735939 containerd[1452]: time="2025-04-30T00:12:05.735815456Z" level=info msg="StopPodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" returns successfully" Apr 30 00:12:05.738042 containerd[1452]: 
time="2025-04-30T00:12:05.737708633Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" Apr 30 00:12:05.738042 containerd[1452]: time="2025-04-30T00:12:05.737798683Z" level=info msg="TearDown network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" successfully" Apr 30 00:12:05.738042 containerd[1452]: time="2025-04-30T00:12:05.737808084Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" returns successfully" Apr 30 00:12:05.738478 containerd[1452]: time="2025-04-30T00:12:05.738447438Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:05.738935 containerd[1452]: time="2025-04-30T00:12:05.738903530Z" level=info msg="TearDown network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" successfully" Apr 30 00:12:05.738935 containerd[1452]: time="2025-04-30T00:12:05.738929413Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" returns successfully" Apr 30 00:12:05.741020 containerd[1452]: time="2025-04-30T00:12:05.740995449Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:05.741430 containerd[1452]: time="2025-04-30T00:12:05.741398895Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:05.741910 containerd[1452]: time="2025-04-30T00:12:05.741835905Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:05.745041 kubelet[2531]: I0430 00:12:05.745012 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d" Apr 30 00:12:05.745854 containerd[1452]: time="2025-04-30T00:12:05.745791437Z" level=info msg="StopPodSandbox for \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\"" Apr 30 00:12:05.746065 containerd[1452]: time="2025-04-30T00:12:05.746041186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:5,}" Apr 30 00:12:05.746876 containerd[1452]: time="2025-04-30T00:12:05.746850919Z" level=info msg="Ensure that sandbox d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d in task-service has been cleanup successfully" Apr 30 00:12:05.747395 containerd[1452]: time="2025-04-30T00:12:05.747275647Z" level=info msg="TearDown network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\" successfully" Apr 30 00:12:05.747395 containerd[1452]: time="2025-04-30T00:12:05.747320172Z" level=info msg="StopPodSandbox for \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\" returns successfully" Apr 30 00:12:05.747668 containerd[1452]: time="2025-04-30T00:12:05.747614126Z" level=info msg="StopPodSandbox for \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\"" Apr 30 00:12:05.748056 containerd[1452]: time="2025-04-30T00:12:05.747730299Z" level=info msg="TearDown network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" successfully" Apr 30 00:12:05.748056 containerd[1452]: time="2025-04-30T00:12:05.747747821Z" level=info msg="StopPodSandbox for 
\"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" returns successfully" Apr 30 00:12:05.748572 containerd[1452]: time="2025-04-30T00:12:05.748537152Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\"" Apr 30 00:12:05.748873 containerd[1452]: time="2025-04-30T00:12:05.748855228Z" level=info msg="TearDown network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" successfully" Apr 30 00:12:05.749446 containerd[1452]: time="2025-04-30T00:12:05.749396850Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" returns successfully" Apr 30 00:12:05.750164 containerd[1452]: time="2025-04-30T00:12:05.750038683Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\"" Apr 30 00:12:05.750164 containerd[1452]: time="2025-04-30T00:12:05.750121893Z" level=info msg="TearDown network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" successfully" Apr 30 00:12:05.750164 containerd[1452]: time="2025-04-30T00:12:05.750132054Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" returns successfully" Apr 30 00:12:05.751292 containerd[1452]: time="2025-04-30T00:12:05.750795250Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\"" Apr 30 00:12:05.751292 containerd[1452]: time="2025-04-30T00:12:05.750881060Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully" Apr 30 00:12:05.751292 containerd[1452]: time="2025-04-30T00:12:05.750890821Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully" Apr 30 00:12:05.751425 containerd[1452]: time="2025-04-30T00:12:05.751384237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:5,}" Apr 30 00:12:05.831158 systemd[1]: run-netns-cni\x2d048697f4\x2d0ad8\x2d17e2\x2d63ab\x2d543f94be5ea9.mount: Deactivated successfully. Apr 30 00:12:05.831264 systemd[1]: run-netns-cni\x2db1abb93b\x2d1e12\x2d9419\x2df65e\x2d6622f105bec3.mount: Deactivated successfully. Apr 30 00:12:05.831355 systemd[1]: run-netns-cni\x2d194c0cd8\x2d8810\x2d3632\x2d1e70\x2d7e8587b2b638.mount: Deactivated successfully. Apr 30 00:12:05.831429 systemd[1]: run-netns-cni\x2df116deb3\x2d7611\x2d850d\x2d1748\x2dd8f2c5dce1cc.mount: Deactivated successfully. Apr 30 00:12:05.831484 systemd[1]: run-netns-cni\x2d1b6b6622\x2da8c8\x2d9b3f\x2d9e39\x2d2ac2ff35aee3.mount: Deactivated successfully. Apr 30 00:12:05.831708 systemd[1]: run-netns-cni\x2de1be6218\x2d3967\x2d01b7\x2d0174\x2d540577c20c66.mount: Deactivated successfully. 
Apr 30 00:12:06.255315 systemd-networkd[1376]: cali126de82a31c: Link UP Apr 30 00:12:06.255534 systemd-networkd[1376]: cali126de82a31c: Gained carrier Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:05.924 [INFO][4509] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:05.942 [INFO][4509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0 calico-kube-controllers-bd4f6bb7c- calico-system ec608e6a-3467-4db3-99e7-9ccb6cf924be 737 0 2025-04-30 00:11:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bd4f6bb7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bd4f6bb7c-nl4l2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali126de82a31c [] []}} ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:05.942 [INFO][4509] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.097 [INFO][4594] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" HandleID="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Workload="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.211 [INFO][4594] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" HandleID="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Workload="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4a70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bd4f6bb7c-nl4l2", "timestamp":"2025-04-30 00:12:06.097242873 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.211 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.212 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.212 [INFO][4594] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.215 [INFO][4594] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.224 [INFO][4594] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.231 [INFO][4594] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.233 [INFO][4594] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.235 [INFO][4594] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.235 [INFO][4594] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.236 [INFO][4594] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0 Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.241 [INFO][4594] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.246 [INFO][4594] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.246 [INFO][4594] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" host="localhost" Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.246 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
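The IPAM exchange above (affinity confirmed for 192.168.88.128/26, then the first address of the block claimed) is Calico's block-based allocator handing out the lowest free address in the node's affine block. A much-simplified sketch of that selection, ignoring handles, reservations and the datastore, assuming only the block and a set of already-used addresses:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks a block and returns the first address not already in use.
// The real Calico allocator also honours reserved addresses and writes a
// handle back to the datastore; this sketch ignores all of that.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // treat the block base as reserved
	}

	for i := 0; i < 3; i++ {
		addr, ok := firstFree(block, used)
		if !ok {
			fmt.Println("block exhausted")
			return
		}
		used[addr] = true
		fmt.Println("assigned", addr) // .129, .130, .131 as in the log
	}
}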
Apr 30 00:12:06.266783 containerd[1452]: 2025-04-30 00:12:06.246 [INFO][4594] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" HandleID="k8s-pod-network.35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Workload="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.267413 containerd[1452]: 2025-04-30 00:12:06.249 [INFO][4509] cni-plugin/k8s.go 386: Populated endpoint ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0", GenerateName:"calico-kube-controllers-bd4f6bb7c-", Namespace:"calico-system", SelfLink:"", UID:"ec608e6a-3467-4db3-99e7-9ccb6cf924be", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd4f6bb7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bd4f6bb7c-nl4l2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali126de82a31c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.267413 containerd[1452]: 2025-04-30 00:12:06.249 [INFO][4509] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.267413 containerd[1452]: 2025-04-30 00:12:06.249 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali126de82a31c ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.267413 containerd[1452]: 2025-04-30 00:12:06.255 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.267413 containerd[1452]: 2025-04-30 00:12:06.255 [INFO][4509] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0", GenerateName:"calico-kube-controllers-bd4f6bb7c-", Namespace:"calico-system", SelfLink:"", UID:"ec608e6a-3467-4db3-99e7-9ccb6cf924be", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd4f6bb7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0", Pod:"calico-kube-controllers-bd4f6bb7c-nl4l2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali126de82a31c", MAC:"96:2e:ba:99:4e:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.267413 containerd[1452]: 2025-04-30 00:12:06.263 [INFO][4509] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0" Namespace="calico-system" Pod="calico-kube-controllers-bd4f6bb7c-nl4l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bd4f6bb7c--nl4l2-eth0" Apr 30 00:12:06.293438 containerd[1452]: time="2025-04-30T00:12:06.293326539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:12:06.293438 containerd[1452]: time="2025-04-30T00:12:06.293405067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:12:06.293438 containerd[1452]: time="2025-04-30T00:12:06.293417148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.293709 containerd[1452]: time="2025-04-30T00:12:06.293493877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.310863 systemd[1]: Started cri-containerd-35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0.scope - libcontainer container 35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0. 
Apr 30 00:12:06.323183 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:12:06.346080 containerd[1452]: time="2025-04-30T00:12:06.346024429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd4f6bb7c-nl4l2,Uid:ec608e6a-3467-4db3-99e7-9ccb6cf924be,Namespace:calico-system,Attempt:5,} returns sandbox id \"35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0\"" Apr 30 00:12:06.348481 containerd[1452]: time="2025-04-30T00:12:06.348254308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:12:06.354806 systemd-networkd[1376]: cali70a3286a56e: Link UP Apr 30 00:12:06.355017 systemd-networkd[1376]: cali70a3286a56e: Gained carrier Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:05.948 [INFO][4537] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:05.980 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0 calico-apiserver-bd89fb76- calico-apiserver d7f443d6-1623-481d-ac5b-c37dd6fd2c49 739 0 2025-04-30 00:11:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd89fb76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bd89fb76-dn8cn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70a3286a56e [] []}} ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:05.980 [INFO][4537] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.119 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" HandleID="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Workload="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.213 [INFO][4613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" HandleID="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Workload="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000302d80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bd89fb76-dn8cn", "timestamp":"2025-04-30 00:12:06.119589029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 
00:12:06.213 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.246 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.246 [INFO][4613] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.315 [INFO][4613] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.319 [INFO][4613] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.330 [INFO][4613] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.332 [INFO][4613] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.335 [INFO][4613] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.335 [INFO][4613] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.337 [INFO][4613] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287 Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.341 [INFO][4613] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.348 [INFO][4613] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.348 [INFO][4613] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" host="localhost" Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.348 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:12:06.366465 containerd[1452]: 2025-04-30 00:12:06.348 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" HandleID="k8s-pod-network.3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Workload="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.367095 containerd[1452]: 2025-04-30 00:12:06.352 [INFO][4537] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0", GenerateName:"calico-apiserver-bd89fb76-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7f443d6-1623-481d-ac5b-c37dd6fd2c49", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd89fb76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bd89fb76-dn8cn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70a3286a56e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.367095 containerd[1452]: 2025-04-30 00:12:06.352 [INFO][4537] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.367095 containerd[1452]: 2025-04-30 00:12:06.352 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70a3286a56e ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.367095 containerd[1452]: 2025-04-30 00:12:06.354 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.367095 containerd[1452]: 2025-04-30 00:12:06.355 [INFO][4537] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" 
Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0", GenerateName:"calico-apiserver-bd89fb76-", Namespace:"calico-apiserver", SelfLink:"", UID:"d7f443d6-1623-481d-ac5b-c37dd6fd2c49", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd89fb76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287", Pod:"calico-apiserver-bd89fb76-dn8cn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70a3286a56e", MAC:"2e:43:fd:d4:51:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.367095 containerd[1452]: 2025-04-30 00:12:06.364 [INFO][4537] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-dn8cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--dn8cn-eth0" Apr 30 00:12:06.402753 containerd[1452]: time="2025-04-30T00:12:06.401878619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:12:06.402753 containerd[1452]: time="2025-04-30T00:12:06.401961267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:12:06.402753 containerd[1452]: time="2025-04-30T00:12:06.401976349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.402753 containerd[1452]: time="2025-04-30T00:12:06.402073399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.421916 systemd[1]: Started cri-containerd-3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287.scope - libcontainer container 3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287. 
Apr 30 00:12:06.435430 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:12:06.453608 systemd-networkd[1376]: cali8daef97ce9e: Link UP Apr 30 00:12:06.453999 systemd-networkd[1376]: cali8daef97ce9e: Gained carrier Apr 30 00:12:06.460503 containerd[1452]: time="2025-04-30T00:12:06.460456780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-dn8cn,Uid:d7f443d6-1623-481d-ac5b-c37dd6fd2c49,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287\"" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:05.894 [INFO][4493] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:05.924 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0 coredns-6f6b679f8f- kube-system 18e58184-40ff-4d1f-8487-cc0971208414 738 0 2025-04-30 00:11:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-fmpt9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8daef97ce9e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:05.924 [INFO][4493] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.115 [INFO][4587] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" HandleID="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Workload="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.213 [INFO][4587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" HandleID="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Workload="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-fmpt9", "timestamp":"2025-04-30 00:12:06.115409501 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.213 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.348 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
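Several CNI ADD operations race for the same host-wide IPAM lock here: each plugin instance logs "About to acquire host-wide IPAM lock" and only proceeds once the previous assignment has released it, which is why the allocations in this section (.130 through .134) complete strictly one after another even though the requests were submitted around the same time. A toy sketch of that serialization with a plain mutex, assuming nothing about Calico's actual lock implementation:

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential addresses from a /26 block while a
// single lock serializes concurrent CNI ADD requests, standing in for
// the "host-wide IPAM lock" in the log.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign(pod string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", 130+a.next)
	a.next++
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip
}

func main() {
	a := &allocator{}
	// Pod names taken from this section of the log; the order the toy
	// assigns them in may differ run to run, the point is serialization.
	pods := []string{
		"calico-apiserver-bd89fb76-dn8cn",
		"coredns-6f6b679f8f-fmpt9",
		"calico-apiserver-bd89fb76-spgc5",
		"csi-node-driver-h9r9d",
		"coredns-6f6b679f8f-sczzd",
	}
	var wg sync.WaitGroup
	for _, p := range pods {
		wg.Add(1)
		go func(p string) { defer wg.Done(); a.assign(p) }(p)
	}
	wg.Wait()
}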
Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.348 [INFO][4587] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.418 [INFO][4587] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.423 [INFO][4587] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.431 [INFO][4587] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.434 [INFO][4587] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.436 [INFO][4587] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.436 [INFO][4587] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.438 [INFO][4587] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4 Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.441 [INFO][4587] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.447 [INFO][4587] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.448 [INFO][4587] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" host="localhost" Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.448 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
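The IPAM steps for coredns-6f6b679f8f-fmpt9 follow the pattern visible above: confirm the host's affinity to 192.168.88.128/26, load the block, claim one free ordinal, then write the block back ("Writing block in order to claim IPs") so the claim is durable before the lock is released. A rough sketch of claiming an ordinal from a 64-slot block, purely illustrative and not Calico's data model:

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block models a /26 allocation block as a 64-slot bitmap.
type block struct {
	cidr netip.Prefix
	used [64]bool
}

// claim finds the first free ordinal, marks it used, and returns the IP.
// In the real flow the updated block is then persisted to the datastore.
func (b *block) claim() (netip.Addr, error) {
	for ord, inUse := range b.used {
		if inUse {
			continue
		}
		b.used[ord] = true
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next()
		}
		return addr, nil
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
	// Ordinals 0-2 (.128-.130) are assumed taken: .130 is visible above,
	// .128/.129 presumably earlier in the boot.
	b.used[0], b.used[1], b.used[2] = true, true, true
	ip, _ := b.claim()
	fmt.Println("claimed", ip) // 192.168.88.131, matching the log
}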
Apr 30 00:12:06.468821 containerd[1452]: 2025-04-30 00:12:06.448 [INFO][4587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" HandleID="k8s-pod-network.bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Workload="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.469341 containerd[1452]: 2025-04-30 00:12:06.451 [INFO][4493] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"18e58184-40ff-4d1f-8487-cc0971208414", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-fmpt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8daef97ce9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.469341 containerd[1452]: 2025-04-30 00:12:06.451 [INFO][4493] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.469341 containerd[1452]: 2025-04-30 00:12:06.451 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8daef97ce9e ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.469341 containerd[1452]: 2025-04-30 00:12:06.454 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.469341 containerd[1452]: 2025-04-30 00:12:06.454 
[INFO][4493] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"18e58184-40ff-4d1f-8487-cc0971208414", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4", Pod:"coredns-6f6b679f8f-fmpt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8daef97ce9e", MAC:"7e:51:47:89:25:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.469341 containerd[1452]: 2025-04-30 00:12:06.463 [INFO][4493] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4" Namespace="kube-system" Pod="coredns-6f6b679f8f-fmpt9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--fmpt9-eth0" Apr 30 00:12:06.487400 containerd[1452]: time="2025-04-30T00:12:06.487035470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:12:06.487400 containerd[1452]: time="2025-04-30T00:12:06.487129000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:12:06.487400 containerd[1452]: time="2025-04-30T00:12:06.487145362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.487629 containerd[1452]: time="2025-04-30T00:12:06.487559366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.508871 systemd[1]: Started cri-containerd-bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4.scope - libcontainer container bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4. 
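The WorkloadEndpoint dump for the coredns pod prints its ports in Go's hex notation (Port:0x35, Port:0x23c1); they are just the familiar coredns ports. A quick decoding, only to make the dump readable:

package main

import "fmt"

func main() {
	// Ports exactly as they appear in the WorkloadEndpointPort entries above.
	ports := map[string]uint16{
		"dns":     0x35,   // UDP
		"dns-tcp": 0x35,   // TCP
		"metrics": 0x23c1, // TCP
	}
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p) // dns/dns-tcp -> 53, metrics -> 9153
	}
}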
Apr 30 00:12:06.523759 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:12:06.558427 systemd-networkd[1376]: cali39b6e15f6c1: Link UP Apr 30 00:12:06.558604 systemd-networkd[1376]: cali39b6e15f6c1: Gained carrier Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:05.950 [INFO][4539] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:05.983 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0 calico-apiserver-bd89fb76- calico-apiserver 045c28bd-5788-44ed-ad00-a7c01b791cf2 740 0 2025-04-30 00:11:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd89fb76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bd89fb76-spgc5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39b6e15f6c1 [] []}} ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:05.983 [INFO][4539] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.116 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" HandleID="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Workload="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.213 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" HandleID="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Workload="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019c130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bd89fb76-spgc5", "timestamp":"2025-04-30 00:12:06.1163382 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.213 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.448 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
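Each time the CNI plugin creates a host-side veth (cali39b6e15f6c1 here), systemd-networkd notices the new link and logs "Link UP" followed by "Gained carrier" once the peer end comes up inside the pod's network namespace. A small sketch that reads the same kernel state straight from sysfs, assuming a Linux host and taking the interface name from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// linkState reports the kernel's view of an interface: the same signals
// systemd-networkd reacts to when it logs "Link UP" / "Gained carrier".
func linkState(iface string) (operstate, carrier string, err error) {
	base := "/sys/class/net/" + iface
	op, err := os.ReadFile(base + "/operstate")
	if err != nil {
		return "", "", err
	}
	ca, err := os.ReadFile(base + "/carrier")
	if err != nil {
		return "", "", err
	}
	return strings.TrimSpace(string(op)), strings.TrimSpace(string(ca)), nil
}

func main() {
	// Interface name copied from the log above; substitute any cali* veth.
	op, ca, err := linkState("cali39b6e15f6c1")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("operstate=%s carrier=%s\n", op, ca) // e.g. operstate=up carrier=1
}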
Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.449 [INFO][4611] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.517 [INFO][4611] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.523 [INFO][4611] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.532 [INFO][4611] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.534 [INFO][4611] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.538 [INFO][4611] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.538 [INFO][4611] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.540 [INFO][4611] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.546 [INFO][4611] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.553 [INFO][4611] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.553 [INFO][4611] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" host="localhost" Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.553 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
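Each allocation above is tracked under a handle of the form k8s-pod-network.<containerID> ("Creating new handle: ..."), so a later CNI DEL for the same sandbox can release exactly the addresses that sandbox claimed. A sketch of that bookkeeping, not Calico's actual datastore schema:

package main

import "fmt"

// handleTracker maps an allocation handle to the addresses claimed under it,
// mirroring how the log ties "k8s-pod-network.<containerID>" to one IP.
type handleTracker struct {
	byHandle map[string][]string
}

func newHandleTracker() *handleTracker {
	return &handleTracker{byHandle: map[string][]string{}}
}

func (t *handleTracker) claim(handle, ip string) {
	t.byHandle[handle] = append(t.byHandle[handle], ip)
}

// release returns the addresses to free when the sandbox is torn down.
func (t *handleTracker) release(handle string) []string {
	ips := t.byHandle[handle]
	delete(t.byHandle, handle)
	return ips
}

func main() {
	t := newHandleTracker()
	// Handle and IP as they appear above for calico-apiserver-bd89fb76-spgc5.
	h := "k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f"
	t.claim(h, "192.168.88.132")
	fmt.Println("released on CNI DEL:", t.release(h)) // [192.168.88.132]
}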
Apr 30 00:12:06.568728 containerd[1452]: 2025-04-30 00:12:06.553 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" HandleID="k8s-pod-network.fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Workload="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.569406 containerd[1452]: 2025-04-30 00:12:06.556 [INFO][4539] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0", GenerateName:"calico-apiserver-bd89fb76-", Namespace:"calico-apiserver", SelfLink:"", UID:"045c28bd-5788-44ed-ad00-a7c01b791cf2", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd89fb76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bd89fb76-spgc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39b6e15f6c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.569406 containerd[1452]: 2025-04-30 00:12:06.557 [INFO][4539] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.569406 containerd[1452]: 2025-04-30 00:12:06.557 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39b6e15f6c1 ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.569406 containerd[1452]: 2025-04-30 00:12:06.558 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.569406 containerd[1452]: 2025-04-30 00:12:06.558 [INFO][4539] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" 
Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0", GenerateName:"calico-apiserver-bd89fb76-", Namespace:"calico-apiserver", SelfLink:"", UID:"045c28bd-5788-44ed-ad00-a7c01b791cf2", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd89fb76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f", Pod:"calico-apiserver-bd89fb76-spgc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39b6e15f6c1", MAC:"62:fb:f2:1a:fc:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.569406 containerd[1452]: 2025-04-30 00:12:06.566 [INFO][4539] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f" Namespace="calico-apiserver" Pod="calico-apiserver-bd89fb76-spgc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd89fb76--spgc5-eth0" Apr 30 00:12:06.573650 containerd[1452]: time="2025-04-30T00:12:06.573616234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fmpt9,Uid:18e58184-40ff-4d1f-8487-cc0971208414,Namespace:kube-system,Attempt:5,} returns sandbox id \"bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4\"" Apr 30 00:12:06.574905 kubelet[2531]: E0430 00:12:06.574883 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:06.576386 containerd[1452]: time="2025-04-30T00:12:06.576348727Z" level=info msg="CreateContainer within sandbox \"bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:12:06.594157 containerd[1452]: time="2025-04-30T00:12:06.593283583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:12:06.594157 containerd[1452]: time="2025-04-30T00:12:06.593896369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:12:06.594157 containerd[1452]: time="2025-04-30T00:12:06.593910210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.594157 containerd[1452]: time="2025-04-30T00:12:06.594011941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.595819 containerd[1452]: time="2025-04-30T00:12:06.595780411Z" level=info msg="CreateContainer within sandbox \"bec045cf39eb01f689f33837ff86c8165ea6c56015f58e0b532bd117412cf3d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f6db72f91b7385e205001150de279dbb32e0f1e85f6764bd9a4f7e82e0c4803\"" Apr 30 00:12:06.596807 containerd[1452]: time="2025-04-30T00:12:06.596776397Z" level=info msg="StartContainer for \"8f6db72f91b7385e205001150de279dbb32e0f1e85f6764bd9a4f7e82e0c4803\"" Apr 30 00:12:06.613884 systemd[1]: Started cri-containerd-fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f.scope - libcontainer container fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f. Apr 30 00:12:06.624371 systemd[1]: Started cri-containerd-8f6db72f91b7385e205001150de279dbb32e0f1e85f6764bd9a4f7e82e0c4803.scope - libcontainer container 8f6db72f91b7385e205001150de279dbb32e0f1e85f6764bd9a4f7e82e0c4803. Apr 30 00:12:06.637861 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:12:06.668389 systemd-networkd[1376]: calibe7a151b01d: Link UP Apr 30 00:12:06.669908 systemd-networkd[1376]: calibe7a151b01d: Gained carrier Apr 30 00:12:06.674412 containerd[1452]: time="2025-04-30T00:12:06.674358157Z" level=info msg="StartContainer for \"8f6db72f91b7385e205001150de279dbb32e0f1e85f6764bd9a4f7e82e0c4803\" returns successfully" Apr 30 00:12:06.674734 containerd[1452]: time="2025-04-30T00:12:06.674506412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd89fb76-spgc5,Uid:045c28bd-5788-44ed-ad00-a7c01b791cf2,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f\"" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:05.943 [INFO][4522] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:05.975 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h9r9d-eth0 csi-node-driver- calico-system c26bbdc1-7849-41b7-afe2-12b01cd7b775 650 0 2025-04-30 00:11:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h9r9d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibe7a151b01d [] []}} ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:05.975 [INFO][4522] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 
00:12:06.097 [INFO][4603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" HandleID="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Workload="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.213 [INFO][4603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" HandleID="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Workload="localhost-k8s-csi--node--driver--h9r9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000696360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h9r9d", "timestamp":"2025-04-30 00:12:06.097372526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.214 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.554 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.554 [INFO][4603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.619 [INFO][4603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.632 [INFO][4603] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.637 [INFO][4603] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.640 [INFO][4603] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.642 [INFO][4603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.642 [INFO][4603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.644 [INFO][4603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2 Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.651 [INFO][4603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.659 [INFO][4603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.660 [INFO][4603] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" host="localhost" Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.660 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:12:06.685358 containerd[1452]: 2025-04-30 00:12:06.660 [INFO][4603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" HandleID="k8s-pod-network.3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Workload="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.686055 containerd[1452]: 2025-04-30 00:12:06.664 [INFO][4522] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9r9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c26bbdc1-7849-41b7-afe2-12b01cd7b775", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h9r9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe7a151b01d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.686055 containerd[1452]: 2025-04-30 00:12:06.664 [INFO][4522] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.686055 containerd[1452]: 2025-04-30 00:12:06.664 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe7a151b01d ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.686055 containerd[1452]: 2025-04-30 00:12:06.670 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.686055 containerd[1452]: 2025-04-30 00:12:06.672 
[INFO][4522] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h9r9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c26bbdc1-7849-41b7-afe2-12b01cd7b775", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2", Pod:"csi-node-driver-h9r9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe7a151b01d", MAC:"a2:3e:7e:bc:8c:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.686055 containerd[1452]: 2025-04-30 00:12:06.682 [INFO][4522] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2" Namespace="calico-system" Pod="csi-node-driver-h9r9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--h9r9d-eth0" Apr 30 00:12:06.717788 containerd[1452]: time="2025-04-30T00:12:06.717671201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:12:06.717788 containerd[1452]: time="2025-04-30T00:12:06.717789654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:12:06.717981 containerd[1452]: time="2025-04-30T00:12:06.717816377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.717981 containerd[1452]: time="2025-04-30T00:12:06.717948311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.739888 systemd[1]: Started cri-containerd-3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2.scope - libcontainer container 3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2. 
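The MAC addresses recorded for each endpoint in these dumps (2e:43:..., 7e:51:..., 62:fb:..., a2:3e:...) are locally administered unicast addresses: the first octet has the local bit set and the multicast bit clear, which is what randomly generated veth MACs look like. A quick check of that property with the standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	// MACs copied from the WorkloadEndpoint dumps in this log.
	macs := []string{
		"2e:43:fd:d4:51:f5", // calico-apiserver-bd89fb76-dn8cn
		"7e:51:47:89:25:d1", // coredns-6f6b679f8f-fmpt9
		"62:fb:f2:1a:fc:62", // calico-apiserver-bd89fb76-spgc5
		"a2:3e:7e:bc:8c:2a", // csi-node-driver-h9r9d
	}
	for _, s := range macs {
		hw, err := net.ParseMAC(s)
		if err != nil {
			fmt.Println("bad MAC:", err)
			continue
		}
		local := hw[0]&0x02 != 0   // locally administered bit set
		unicast := hw[0]&0x01 == 0 // multicast bit clear
		fmt.Printf("%s locally-administered=%v unicast=%v\n", hw, local, unicast)
	}
}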
Apr 30 00:12:06.767726 kubelet[2531]: E0430 00:12:06.767609 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:06.778675 systemd-networkd[1376]: calid728b862020: Link UP Apr 30 00:12:06.779217 systemd-networkd[1376]: calid728b862020: Gained carrier Apr 30 00:12:06.786708 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:12:06.790991 kubelet[2531]: E0430 00:12:06.789925 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:06.798479 kubelet[2531]: I0430 00:12:06.798412 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fmpt9" podStartSLOduration=26.798384176 podStartE2EDuration="26.798384176s" podCreationTimestamp="2025-04-30 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:12:06.790339633 +0000 UTC m=+31.598743910" watchObservedRunningTime="2025-04-30 00:12:06.798384176 +0000 UTC m=+31.606788453" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:05.759 [INFO][4464] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:05.876 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--sczzd-eth0 coredns-6f6b679f8f- kube-system f10de97b-7bb8-4314-ad06-307db947bbd3 733 0 2025-04-30 00:11:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-sczzd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid728b862020 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:05.884 [INFO][4464] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.108 [INFO][4574] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" HandleID="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Workload="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.214 [INFO][4574] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" HandleID="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Workload="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036d1a0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-sczzd", "timestamp":"2025-04-30 00:12:06.108634734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.214 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.660 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.660 [INFO][4574] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.723 [INFO][4574] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.733 [INFO][4574] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.742 [INFO][4574] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.745 [INFO][4574] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.750 [INFO][4574] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.750 [INFO][4574] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.752 [INFO][4574] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764 Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.758 [INFO][4574] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.771 [INFO][4574] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.771 [INFO][4574] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" host="localhost" Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.771 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:12:06.801431 containerd[1452]: 2025-04-30 00:12:06.771 [INFO][4574] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" HandleID="k8s-pod-network.e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Workload="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.801982 containerd[1452]: 2025-04-30 00:12:06.775 [INFO][4464] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--sczzd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f10de97b-7bb8-4314-ad06-307db947bbd3", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-sczzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid728b862020", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.801982 containerd[1452]: 2025-04-30 00:12:06.775 [INFO][4464] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.801982 containerd[1452]: 2025-04-30 00:12:06.775 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid728b862020 ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.801982 containerd[1452]: 2025-04-30 00:12:06.779 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.801982 containerd[1452]: 2025-04-30 00:12:06.782 
[INFO][4464] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--sczzd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f10de97b-7bb8-4314-ad06-307db947bbd3", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764", Pod:"coredns-6f6b679f8f-sczzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid728b862020", MAC:"52:21:40:be:82:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:12:06.801982 containerd[1452]: 2025-04-30 00:12:06.799 [INFO][4464] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764" Namespace="kube-system" Pod="coredns-6f6b679f8f-sczzd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sczzd-eth0" Apr 30 00:12:06.838879 containerd[1452]: time="2025-04-30T00:12:06.838831553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9r9d,Uid:c26bbdc1-7849-41b7-afe2-12b01cd7b775,Namespace:calico-system,Attempt:5,} returns sandbox id \"3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2\"" Apr 30 00:12:06.849787 containerd[1452]: time="2025-04-30T00:12:06.849666995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:12:06.849993 containerd[1452]: time="2025-04-30T00:12:06.849931103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:12:06.849993 containerd[1452]: time="2025-04-30T00:12:06.849962947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.850276 containerd[1452]: time="2025-04-30T00:12:06.850194171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:12:06.889920 systemd[1]: Started cri-containerd-e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764.scope - libcontainer container e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764. Apr 30 00:12:06.915650 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:12:06.937376 containerd[1452]: time="2025-04-30T00:12:06.937254987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sczzd,Uid:f10de97b-7bb8-4314-ad06-307db947bbd3,Namespace:kube-system,Attempt:5,} returns sandbox id \"e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764\"" Apr 30 00:12:06.938568 kubelet[2531]: E0430 00:12:06.938099 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:06.941558 containerd[1452]: time="2025-04-30T00:12:06.941515724Z" level=info msg="CreateContainer within sandbox \"e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:12:06.978368 containerd[1452]: time="2025-04-30T00:12:06.978305669Z" level=info msg="CreateContainer within sandbox \"e4d4e23ca2cd6f5e3a0665b0d45a88b2e46be52399505082fd2991db84159764\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"689af7a0956a786780ed359f31a09828508db234b7e66d27eae0a61249a510d5\"" Apr 30 00:12:06.980178 containerd[1452]: time="2025-04-30T00:12:06.979241169Z" level=info msg="StartContainer for \"689af7a0956a786780ed359f31a09828508db234b7e66d27eae0a61249a510d5\"" Apr 30 00:12:07.040074 systemd[1]: Started cri-containerd-689af7a0956a786780ed359f31a09828508db234b7e66d27eae0a61249a510d5.scope - libcontainer container 689af7a0956a786780ed359f31a09828508db234b7e66d27eae0a61249a510d5. 
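For coredns-6f6b679f8f-sczzd the usual CRI sequence is visible end to end: RunPodSandbox returns the sandbox ID, CreateContainer returns the container ID, StartContainer is issued, and systemd starts the matching cri-containerd scope. The journal timestamps let you measure each step; a small sketch using the values printed above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Journal timestamps copied from the lines above (UTC).
	steps := []struct {
		name string
		at   string
	}{
		{"RunPodSandbox returned", "2025-04-30T00:12:06.937376Z"},
		{"CreateContainer returned", "2025-04-30T00:12:06.978368Z"},
		{"StartContainer issued", "2025-04-30T00:12:06.980178Z"},
		{"cri-containerd scope started", "2025-04-30T00:12:07.040074Z"},
	}
	var prev time.Time
	for i, s := range steps {
		t, err := time.Parse(time.RFC3339Nano, s.at)
		if err != nil {
			panic(err)
		}
		if i == 0 {
			fmt.Printf("%-30s %s\n", s.name, t.Format("15:04:05.000000"))
		} else {
			fmt.Printf("%-30s +%v\n", s.name, t.Sub(prev))
		}
		prev = t
	}
}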
Apr 30 00:12:07.094538 containerd[1452]: time="2025-04-30T00:12:07.094465983Z" level=info msg="StartContainer for \"689af7a0956a786780ed359f31a09828508db234b7e66d27eae0a61249a510d5\" returns successfully" Apr 30 00:12:07.792312 kubelet[2531]: E0430 00:12:07.792278 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:07.808924 kubelet[2531]: I0430 00:12:07.808783 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sczzd" podStartSLOduration=27.808767911 podStartE2EDuration="27.808767911s" podCreationTimestamp="2025-04-30 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:12:07.807491342 +0000 UTC m=+32.615895619" watchObservedRunningTime="2025-04-30 00:12:07.808767911 +0000 UTC m=+32.617172148" Apr 30 00:12:07.819508 kubelet[2531]: E0430 00:12:07.818828 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:07.921406 systemd-networkd[1376]: cali8daef97ce9e: Gained IPv6LL Apr 30 00:12:07.984990 systemd-networkd[1376]: calibe7a151b01d: Gained IPv6LL Apr 30 00:12:07.985256 systemd-networkd[1376]: cali39b6e15f6c1: Gained IPv6LL Apr 30 00:12:07.985396 systemd-networkd[1376]: cali126de82a31c: Gained IPv6LL Apr 30 00:12:08.015868 containerd[1452]: time="2025-04-30T00:12:08.015812834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:08.017164 containerd[1452]: time="2025-04-30T00:12:08.017115765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:12:08.018292 containerd[1452]: time="2025-04-30T00:12:08.018267286Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:08.020607 containerd[1452]: time="2025-04-30T00:12:08.020565086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:08.021285 containerd[1452]: time="2025-04-30T00:12:08.021240413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.672951782s" Apr 30 00:12:08.021285 containerd[1452]: time="2025-04-30T00:12:08.021277176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:12:08.022841 containerd[1452]: time="2025-04-30T00:12:08.022808043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:12:08.031184 containerd[1452]: time="2025-04-30T00:12:08.029791451Z" level=info msg="CreateContainer within sandbox 
\"35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:12:08.044352 containerd[1452]: time="2025-04-30T00:12:08.044148133Z" level=info msg="CreateContainer within sandbox \"35838b489c7ecb342698db7d79fb642be2227da9c5d96a03f45133f641af60c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"762ec7d55088159b2a9ea7a8077f227ba9106c5a2116f171b408b771575f5c5a\"" Apr 30 00:12:08.045905 containerd[1452]: time="2025-04-30T00:12:08.045797849Z" level=info msg="StartContainer for \"762ec7d55088159b2a9ea7a8077f227ba9106c5a2116f171b408b771575f5c5a\"" Apr 30 00:12:08.081092 systemd[1]: Started cri-containerd-762ec7d55088159b2a9ea7a8077f227ba9106c5a2116f171b408b771575f5c5a.scope - libcontainer container 762ec7d55088159b2a9ea7a8077f227ba9106c5a2116f171b408b771575f5c5a. Apr 30 00:12:08.117253 containerd[1452]: time="2025-04-30T00:12:08.117200475Z" level=info msg="StartContainer for \"762ec7d55088159b2a9ea7a8077f227ba9106c5a2116f171b408b771575f5c5a\" returns successfully" Apr 30 00:12:08.368863 systemd-networkd[1376]: cali70a3286a56e: Gained IPv6LL Apr 30 00:12:08.626604 systemd-networkd[1376]: calid728b862020: Gained IPv6LL Apr 30 00:12:08.823098 kubelet[2531]: E0430 00:12:08.823040 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:08.823475 kubelet[2531]: E0430 00:12:08.823144 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:09.688215 containerd[1452]: time="2025-04-30T00:12:09.688152940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:09.688991 containerd[1452]: time="2025-04-30T00:12:09.688937993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:12:09.689630 containerd[1452]: time="2025-04-30T00:12:09.689589837Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:09.691705 containerd[1452]: time="2025-04-30T00:12:09.691654417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:09.692793 containerd[1452]: time="2025-04-30T00:12:09.692758292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.669916847s" Apr 30 00:12:09.692851 containerd[1452]: time="2025-04-30T00:12:09.692794615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:12:09.694334 containerd[1452]: time="2025-04-30T00:12:09.694259754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 
30 00:12:09.695667 containerd[1452]: time="2025-04-30T00:12:09.695537481Z" level=info msg="CreateContainer within sandbox \"3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:12:09.712020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390203695.mount: Deactivated successfully. Apr 30 00:12:09.720406 containerd[1452]: time="2025-04-30T00:12:09.720344845Z" level=info msg="CreateContainer within sandbox \"3c552ca1667c333e6d2de21d34210b202cde204e3b9f854fae2e7b731b167287\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"29f28a2263abad73d79d58ceeddcbd335610ae2b960d27245b94049b997e09fe\"" Apr 30 00:12:09.721296 containerd[1452]: time="2025-04-30T00:12:09.721236986Z" level=info msg="StartContainer for \"29f28a2263abad73d79d58ceeddcbd335610ae2b960d27245b94049b997e09fe\"" Apr 30 00:12:09.777867 systemd[1]: Started cri-containerd-29f28a2263abad73d79d58ceeddcbd335610ae2b960d27245b94049b997e09fe.scope - libcontainer container 29f28a2263abad73d79d58ceeddcbd335610ae2b960d27245b94049b997e09fe. Apr 30 00:12:09.818702 containerd[1452]: time="2025-04-30T00:12:09.818311218Z" level=info msg="StartContainer for \"29f28a2263abad73d79d58ceeddcbd335610ae2b960d27245b94049b997e09fe\" returns successfully" Apr 30 00:12:09.828958 kubelet[2531]: E0430 00:12:09.828868 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:09.828958 kubelet[2531]: E0430 00:12:09.828952 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:09.846299 kubelet[2531]: I0430 00:12:09.844656 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bd4f6bb7c-nl4l2" podStartSLOduration=18.170161589 podStartE2EDuration="19.844628925s" podCreationTimestamp="2025-04-30 00:11:50 +0000 UTC" firstStartedPulling="2025-04-30 00:12:06.348051567 +0000 UTC m=+31.156455844" lastFinishedPulling="2025-04-30 00:12:08.022518903 +0000 UTC m=+32.830923180" observedRunningTime="2025-04-30 00:12:08.837831805 +0000 UTC m=+33.646236082" watchObservedRunningTime="2025-04-30 00:12:09.844628925 +0000 UTC m=+34.653033202" Apr 30 00:12:09.921740 kubelet[2531]: I0430 00:12:09.921693 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:12:09.922172 kubelet[2531]: E0430 00:12:09.922016 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:09.926035 kubelet[2531]: I0430 00:12:09.925971 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bd89fb76-dn8cn" podStartSLOduration=16.695207746 podStartE2EDuration="19.925955527s" podCreationTimestamp="2025-04-30 00:11:50 +0000 UTC" firstStartedPulling="2025-04-30 00:12:06.463131027 +0000 UTC m=+31.271535304" lastFinishedPulling="2025-04-30 00:12:09.693878808 +0000 UTC m=+34.502283085" observedRunningTime="2025-04-30 00:12:09.846839595 +0000 UTC m=+34.655243872" watchObservedRunningTime="2025-04-30 00:12:09.925955527 +0000 UTC m=+34.734359804" Apr 30 00:12:10.019075 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:39668.service - OpenSSH 
per-connection server daemon (10.0.0.1:39668). Apr 30 00:12:10.074818 containerd[1452]: time="2025-04-30T00:12:10.074769492Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:10.075711 containerd[1452]: time="2025-04-30T00:12:10.075267325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:12:10.078045 containerd[1452]: time="2025-04-30T00:12:10.078014386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 383.71779ms" Apr 30 00:12:10.078181 containerd[1452]: time="2025-04-30T00:12:10.078047549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:12:10.079564 containerd[1452]: time="2025-04-30T00:12:10.079513965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:12:10.081917 containerd[1452]: time="2025-04-30T00:12:10.081797556Z" level=info msg="CreateContainer within sandbox \"fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:12:10.100754 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 39668 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:10.103374 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:10.105320 containerd[1452]: time="2025-04-30T00:12:10.105233824Z" level=info msg="CreateContainer within sandbox \"fb085b1f69ebbfc14317e68d01013a6a030394a48a57f811696e033d2141305f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6cbd5470c49176afbfded452bbeb0b1021bb875d6fb887889f2f463dedbd78b0\"" Apr 30 00:12:10.107274 containerd[1452]: time="2025-04-30T00:12:10.106004634Z" level=info msg="StartContainer for \"6cbd5470c49176afbfded452bbeb0b1021bb875d6fb887889f2f463dedbd78b0\"" Apr 30 00:12:10.112009 systemd-logind[1431]: New session 9 of user core. Apr 30 00:12:10.119987 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:12:10.142939 systemd[1]: Started cri-containerd-6cbd5470c49176afbfded452bbeb0b1021bb875d6fb887889f2f463dedbd78b0.scope - libcontainer container 6cbd5470c49176afbfded452bbeb0b1021bb875d6fb887889f2f463dedbd78b0. Apr 30 00:12:10.226509 containerd[1452]: time="2025-04-30T00:12:10.226058441Z" level=info msg="StartContainer for \"6cbd5470c49176afbfded452bbeb0b1021bb875d6fb887889f2f463dedbd78b0\" returns successfully" Apr 30 00:12:10.386378 sshd[5340]: Connection closed by 10.0.0.1 port 39668 Apr 30 00:12:10.389849 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:10.395083 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:39668.service: Deactivated successfully. Apr 30 00:12:10.396930 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:12:10.400016 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:12:10.401065 systemd-logind[1431]: Removed session 9. 
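The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For calico-apiserver-bd89fb76-dn8cn that is 19.925955527s - 3.230747781s = 16.695207746s, exactly as logged; the second apiserver pull just above completed in roughly 384ms with only 77 bytes read, which suggests the layers were already cached and only image metadata was fetched. A short sketch that reproduces the arithmetic from the timestamps quoted in the log (the layout string and helper are illustrative, not kubelet's own code):

package main

import (
	"fmt"
	"time"
)

// layout matches the timestamp format printed by the latency tracker above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the calico-apiserver-bd89fb76-dn8cn entry above.
	created := mustParse("2025-04-30 00:11:50 +0000 UTC")
	firstPull := mustParse("2025-04-30 00:12:06.463131027 +0000 UTC")
	lastPull := mustParse("2025-04-30 00:12:09.693878808 +0000 UTC")
	observed := mustParse("2025-04-30 00:12:09.925955527 +0000 UTC")

	e2e := observed.Sub(created)    // podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling images
	slo := e2e - pull               // podStartSLOduration: end-to-end minus pull time

	fmt.Printf("E2E: %v, pull: %v, SLO: %v\n", e2e, pull, slo)
	// Prints E2E: 19.925955527s, pull: 3.230747781s, SLO: 16.695207746s,
	// matching the figures logged by pod_startup_latency_tracker.
}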
Apr 30 00:12:10.631713 kernel: bpftool[5439]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:12:10.818393 systemd-networkd[1376]: vxlan.calico: Link UP Apr 30 00:12:10.818400 systemd-networkd[1376]: vxlan.calico: Gained carrier Apr 30 00:12:10.843713 kubelet[2531]: I0430 00:12:10.843599 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:12:10.844135 kubelet[2531]: E0430 00:12:10.844046 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:10.867076 kubelet[2531]: I0430 00:12:10.866200 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bd89fb76-spgc5" podStartSLOduration=17.468116146 podStartE2EDuration="20.866179863s" podCreationTimestamp="2025-04-30 00:11:50 +0000 UTC" firstStartedPulling="2025-04-30 00:12:06.680746962 +0000 UTC m=+31.489151239" lastFinishedPulling="2025-04-30 00:12:10.078810639 +0000 UTC m=+34.887214956" observedRunningTime="2025-04-30 00:12:10.863443922 +0000 UTC m=+35.671848239" watchObservedRunningTime="2025-04-30 00:12:10.866179863 +0000 UTC m=+35.674584100" Apr 30 00:12:11.435699 containerd[1452]: time="2025-04-30T00:12:11.435189719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:11.436112 containerd[1452]: time="2025-04-30T00:12:11.435799718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:12:11.436715 containerd[1452]: time="2025-04-30T00:12:11.436688055Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:11.438848 containerd[1452]: time="2025-04-30T00:12:11.438797791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:11.439756 containerd[1452]: time="2025-04-30T00:12:11.439710409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.360165442s" Apr 30 00:12:11.439756 containerd[1452]: time="2025-04-30T00:12:11.439750652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:12:11.441831 containerd[1452]: time="2025-04-30T00:12:11.441797623Z" level=info msg="CreateContainer within sandbox \"3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:12:11.471334 containerd[1452]: time="2025-04-30T00:12:11.471286077Z" level=info msg="CreateContainer within sandbox \"3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"04212f9e0334554704c4d91982a0c08a64a860a70f5df52ca571ef2846412504\"" Apr 30 00:12:11.472737 containerd[1452]: time="2025-04-30T00:12:11.471787189Z" 
level=info msg="StartContainer for \"04212f9e0334554704c4d91982a0c08a64a860a70f5df52ca571ef2846412504\"" Apr 30 00:12:11.507889 systemd[1]: Started cri-containerd-04212f9e0334554704c4d91982a0c08a64a860a70f5df52ca571ef2846412504.scope - libcontainer container 04212f9e0334554704c4d91982a0c08a64a860a70f5df52ca571ef2846412504. Apr 30 00:12:11.534793 containerd[1452]: time="2025-04-30T00:12:11.534751031Z" level=info msg="StartContainer for \"04212f9e0334554704c4d91982a0c08a64a860a70f5df52ca571ef2846412504\" returns successfully" Apr 30 00:12:11.538871 containerd[1452]: time="2025-04-30T00:12:11.538538394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:12:11.824926 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Apr 30 00:12:11.848325 kubelet[2531]: I0430 00:12:11.848011 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:12:12.630525 containerd[1452]: time="2025-04-30T00:12:12.630471428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:12.631277 containerd[1452]: time="2025-04-30T00:12:12.631233555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:12:12.632215 containerd[1452]: time="2025-04-30T00:12:12.632184575Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:12.634248 containerd[1452]: time="2025-04-30T00:12:12.634215021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:12:12.635023 containerd[1452]: time="2025-04-30T00:12:12.634984990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.096407233s" Apr 30 00:12:12.635063 containerd[1452]: time="2025-04-30T00:12:12.635040553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:12:12.637522 containerd[1452]: time="2025-04-30T00:12:12.637491026Z" level=info msg="CreateContainer within sandbox \"3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:12:12.655835 containerd[1452]: time="2025-04-30T00:12:12.655784568Z" level=info msg="CreateContainer within sandbox \"3bb966b609f12ca766e16ed62665517aaefa1de4bdfc75bb54912274a74215c2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"506f2cd8b906609c4b67905d6585057f03ddd5ce5346d30c5e93cc775cae75b8\"" Apr 30 00:12:12.656795 containerd[1452]: time="2025-04-30T00:12:12.656759349Z" level=info msg="StartContainer for \"506f2cd8b906609c4b67905d6585057f03ddd5ce5346d30c5e93cc775cae75b8\"" Apr 30 00:12:12.691903 systemd[1]: Started 
cri-containerd-506f2cd8b906609c4b67905d6585057f03ddd5ce5346d30c5e93cc775cae75b8.scope - libcontainer container 506f2cd8b906609c4b67905d6585057f03ddd5ce5346d30c5e93cc775cae75b8. Apr 30 00:12:12.722613 containerd[1452]: time="2025-04-30T00:12:12.722566098Z" level=info msg="StartContainer for \"506f2cd8b906609c4b67905d6585057f03ddd5ce5346d30c5e93cc775cae75b8\" returns successfully" Apr 30 00:12:12.867200 kubelet[2531]: I0430 00:12:12.866161 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h9r9d" podStartSLOduration=17.071264693 podStartE2EDuration="22.866142383s" podCreationTimestamp="2025-04-30 00:11:50 +0000 UTC" firstStartedPulling="2025-04-30 00:12:06.841217249 +0000 UTC m=+31.649621526" lastFinishedPulling="2025-04-30 00:12:12.636094979 +0000 UTC m=+37.444499216" observedRunningTime="2025-04-30 00:12:12.865024673 +0000 UTC m=+37.673429030" watchObservedRunningTime="2025-04-30 00:12:12.866142383 +0000 UTC m=+37.674546660" Apr 30 00:12:13.362381 kubelet[2531]: I0430 00:12:13.362331 2531 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:12:13.363592 kubelet[2531]: I0430 00:12:13.363563 2531 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:12:15.404412 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:39282.service - OpenSSH per-connection server daemon (10.0.0.1:39282). Apr 30 00:12:15.496162 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 39282 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:15.497944 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:15.506421 systemd-logind[1431]: New session 10 of user core. Apr 30 00:12:15.516906 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:12:15.774738 sshd[5618]: Connection closed by 10.0.0.1 port 39282 Apr 30 00:12:15.775208 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:15.783935 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:39282.service: Deactivated successfully. Apr 30 00:12:15.786216 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:12:15.789054 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:12:15.800133 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:39290.service - OpenSSH per-connection server daemon (10.0.0.1:39290). Apr 30 00:12:15.802021 systemd-logind[1431]: Removed session 10. Apr 30 00:12:15.848148 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 39290 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:15.850305 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:15.855403 systemd-logind[1431]: New session 11 of user core. Apr 30 00:12:15.865885 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:12:16.111731 sshd[5633]: Connection closed by 10.0.0.1 port 39290 Apr 30 00:12:16.114079 sshd-session[5631]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:16.122140 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:39290.service: Deactivated successfully. Apr 30 00:12:16.124495 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:12:16.135248 systemd-logind[1431]: Session 11 logged out. 
Waiting for processes to exit. Apr 30 00:12:16.155203 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:39304.service - OpenSSH per-connection server daemon (10.0.0.1:39304). Apr 30 00:12:16.160171 systemd-logind[1431]: Removed session 11. Apr 30 00:12:16.225606 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 39304 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:16.227170 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:16.233215 systemd-logind[1431]: New session 12 of user core. Apr 30 00:12:16.242908 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:12:16.408165 sshd[5653]: Connection closed by 10.0.0.1 port 39304 Apr 30 00:12:16.409819 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:16.415270 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:39304.service: Deactivated successfully. Apr 30 00:12:16.419198 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:12:16.420282 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:12:16.421595 systemd-logind[1431]: Removed session 12. Apr 30 00:12:17.147918 kubelet[2531]: I0430 00:12:17.147811 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:12:21.426516 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:39308.service - OpenSSH per-connection server daemon (10.0.0.1:39308). Apr 30 00:12:21.501343 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 39308 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:21.504944 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:21.512447 systemd-logind[1431]: New session 13 of user core. Apr 30 00:12:21.523908 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:12:21.785587 sshd[5682]: Connection closed by 10.0.0.1 port 39308 Apr 30 00:12:21.785975 sshd-session[5680]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:21.791890 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:39308.service: Deactivated successfully. Apr 30 00:12:21.795154 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:12:21.796113 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:12:21.797122 systemd-logind[1431]: Removed session 13. Apr 30 00:12:25.105708 kubelet[2531]: E0430 00:12:25.105488 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:12:26.798148 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:51956.service - OpenSSH per-connection server daemon (10.0.0.1:51956). Apr 30 00:12:26.849019 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 51956 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:26.850472 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:26.856707 systemd-logind[1431]: New session 14 of user core. Apr 30 00:12:26.867857 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:12:27.034981 sshd[5718]: Connection closed by 10.0.0.1 port 51956 Apr 30 00:12:27.035600 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:27.045823 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:51956.service: Deactivated successfully. 
Apr 30 00:12:27.048079 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:12:27.052262 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:12:27.071190 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968). Apr 30 00:12:27.072654 systemd-logind[1431]: Removed session 14. Apr 30 00:12:27.117526 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:27.119279 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:27.124543 systemd-logind[1431]: New session 15 of user core. Apr 30 00:12:27.136973 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:12:27.406349 sshd[5732]: Connection closed by 10.0.0.1 port 51968 Apr 30 00:12:27.407456 sshd-session[5730]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:27.417593 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:51968.service: Deactivated successfully. Apr 30 00:12:27.419528 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:12:27.420997 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:12:27.428303 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972). Apr 30 00:12:27.429403 systemd-logind[1431]: Removed session 15. Apr 30 00:12:27.506799 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:27.508232 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:27.512788 systemd-logind[1431]: New session 16 of user core. Apr 30 00:12:27.526915 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:12:29.066795 sshd[5745]: Connection closed by 10.0.0.1 port 51972 Apr 30 00:12:29.068202 sshd-session[5742]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:29.079429 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:51972.service: Deactivated successfully. Apr 30 00:12:29.082035 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:12:29.084372 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:12:29.094181 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984). Apr 30 00:12:29.098881 systemd-logind[1431]: Removed session 16. Apr 30 00:12:29.140342 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:29.142255 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:29.147153 systemd-logind[1431]: New session 17 of user core. Apr 30 00:12:29.159892 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:12:29.560533 sshd[5765]: Connection closed by 10.0.0.1 port 51984 Apr 30 00:12:29.562905 sshd-session[5763]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:29.574861 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:51984.service: Deactivated successfully. Apr 30 00:12:29.580050 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:12:29.585780 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit. 
Apr 30 00:12:29.596209 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:51990.service - OpenSSH per-connection server daemon (10.0.0.1:51990). Apr 30 00:12:29.597608 systemd-logind[1431]: Removed session 17. Apr 30 00:12:29.645121 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 51990 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:29.646595 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:29.651080 systemd-logind[1431]: New session 18 of user core. Apr 30 00:12:29.667038 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:12:29.801792 sshd[5777]: Connection closed by 10.0.0.1 port 51990 Apr 30 00:12:29.802389 sshd-session[5775]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:29.806094 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:51990.service: Deactivated successfully. Apr 30 00:12:29.809450 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:12:29.811308 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:12:29.812421 systemd-logind[1431]: Removed session 18. Apr 30 00:12:34.816949 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:38712.service - OpenSSH per-connection server daemon (10.0.0.1:38712). Apr 30 00:12:34.861079 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 38712 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:12:34.862426 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:12:34.869903 systemd-logind[1431]: New session 19 of user core. Apr 30 00:12:34.876992 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:12:35.005820 sshd[5822]: Connection closed by 10.0.0.1 port 38712 Apr 30 00:12:35.006249 sshd-session[5820]: pam_unix(sshd:session): session closed for user core Apr 30 00:12:35.009184 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:38712.service: Deactivated successfully. Apr 30 00:12:35.010750 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:12:35.014018 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:12:35.015575 systemd-logind[1431]: Removed session 19. 
Apr 30 00:12:35.266249 containerd[1452]: time="2025-04-30T00:12:35.266061577Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:35.266249 containerd[1452]: time="2025-04-30T00:12:35.266173181Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:35.266249 containerd[1452]: time="2025-04-30T00:12:35.266184101Z" level=info msg="StopPodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:35.266817 containerd[1452]: time="2025-04-30T00:12:35.266565674Z" level=info msg="RemovePodSandbox for \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:35.268326 containerd[1452]: time="2025-04-30T00:12:35.268275452Z" level=info msg="Forcibly stopping sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\"" Apr 30 00:12:35.268398 containerd[1452]: time="2025-04-30T00:12:35.268382056Z" level=info msg="TearDown network for sandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" successfully" Apr 30 00:12:35.286579 containerd[1452]: time="2025-04-30T00:12:35.286532195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.287283 containerd[1452]: time="2025-04-30T00:12:35.286671000Z" level=info msg="RemovePodSandbox \"fe61f54e4ba4cb144bfca4ce2e5e194a4f23c6516f2ba55526a5717c592678d8\" returns successfully" Apr 30 00:12:35.287508 containerd[1452]: time="2025-04-30T00:12:35.287479907Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:35.287719 containerd[1452]: time="2025-04-30T00:12:35.287702075Z" level=info msg="TearDown network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" successfully" Apr 30 00:12:35.287804 containerd[1452]: time="2025-04-30T00:12:35.287789998Z" level=info msg="StopPodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" returns successfully" Apr 30 00:12:35.288862 containerd[1452]: time="2025-04-30T00:12:35.288745271Z" level=info msg="RemovePodSandbox for \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:35.288926 containerd[1452]: time="2025-04-30T00:12:35.288866395Z" level=info msg="Forcibly stopping sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\"" Apr 30 00:12:35.289076 containerd[1452]: time="2025-04-30T00:12:35.289054121Z" level=info msg="TearDown network for sandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" successfully" Apr 30 00:12:35.292425 containerd[1452]: time="2025-04-30T00:12:35.292386075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
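The block above begins a cleanup pass that continues to the end of this section: for each stale pod sandbox, containerd logs StopPodSandbox, a network TearDown, RemovePodSandbox, then a forced stop, and in every case the status lookup warns "not found" before the removal still returns successfully, which reads as routine removal of sandboxes that are already gone rather than a failure. A small log-scanning sketch that tallies this pattern across the capture, assuming the console output is saved to a local file named journal.log (the filename is a placeholder):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// journal.log is a placeholder for wherever this console capture is saved.
	f, err := os.Open("journal.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// The patterns match the escaped quotes exactly as they appear in this capture.
	removed := regexp.MustCompile(`RemovePodSandbox \\"([0-9a-f]+)\\" returns successfully`)
	notFound := regexp.MustCompile(`Failed to get podSandbox status for container event for sandboxID \\"([0-9a-f]+)\\"`)

	removedIDs := map[string]bool{}
	notFoundIDs := map[string]bool{}

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // lines in this capture are long
	for scanner.Scan() {
		line := scanner.Text()
		if m := removed.FindStringSubmatch(line); m != nil {
			removedIDs[m[1]] = true
		}
		if m := notFound.FindStringSubmatch(line); m != nil {
			notFoundIDs[m[1]] = true
		}
	}

	fmt.Printf("%d sandboxes removed, %d were already gone when their status was checked\n",
		len(removedIDs), len(notFoundIDs))
}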
Apr 30 00:12:35.292502 containerd[1452]: time="2025-04-30T00:12:35.292450557Z" level=info msg="RemovePodSandbox \"4aa782976f7eb8fa1d9c5ee51848e93c5b32566f0bba0ec58c32f81f4d537098\" returns successfully" Apr 30 00:12:35.293162 containerd[1452]: time="2025-04-30T00:12:35.292856291Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" Apr 30 00:12:35.293162 containerd[1452]: time="2025-04-30T00:12:35.292947694Z" level=info msg="TearDown network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" successfully" Apr 30 00:12:35.293162 containerd[1452]: time="2025-04-30T00:12:35.292957774Z" level=info msg="StopPodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" returns successfully" Apr 30 00:12:35.295086 containerd[1452]: time="2025-04-30T00:12:35.293857565Z" level=info msg="RemovePodSandbox for \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" Apr 30 00:12:35.295086 containerd[1452]: time="2025-04-30T00:12:35.293885766Z" level=info msg="Forcibly stopping sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\"" Apr 30 00:12:35.295086 containerd[1452]: time="2025-04-30T00:12:35.293943808Z" level=info msg="TearDown network for sandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" successfully" Apr 30 00:12:35.297478 containerd[1452]: time="2025-04-30T00:12:35.297437927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.297615 containerd[1452]: time="2025-04-30T00:12:35.297598012Z" level=info msg="RemovePodSandbox \"2f1f1368ca4f668976779a37d52f02211774e74c80245278be69237adc11c5ef\" returns successfully" Apr 30 00:12:35.298333 containerd[1452]: time="2025-04-30T00:12:35.298307437Z" level=info msg="StopPodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\"" Apr 30 00:12:35.298429 containerd[1452]: time="2025-04-30T00:12:35.298413920Z" level=info msg="TearDown network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" successfully" Apr 30 00:12:35.298462 containerd[1452]: time="2025-04-30T00:12:35.298427801Z" level=info msg="StopPodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" returns successfully" Apr 30 00:12:35.298761 containerd[1452]: time="2025-04-30T00:12:35.298726691Z" level=info msg="RemovePodSandbox for \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\"" Apr 30 00:12:35.298761 containerd[1452]: time="2025-04-30T00:12:35.298759772Z" level=info msg="Forcibly stopping sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\"" Apr 30 00:12:35.298851 containerd[1452]: time="2025-04-30T00:12:35.298834655Z" level=info msg="TearDown network for sandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" successfully" Apr 30 00:12:35.301603 containerd[1452]: time="2025-04-30T00:12:35.301519306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.301603 containerd[1452]: time="2025-04-30T00:12:35.301574628Z" level=info msg="RemovePodSandbox \"f2198508b088647f41a1552159933584b1f5c0165135a7c21823f7fe23df951b\" returns successfully" Apr 30 00:12:35.302001 containerd[1452]: time="2025-04-30T00:12:35.301922160Z" level=info msg="StopPodSandbox for \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\"" Apr 30 00:12:35.302046 containerd[1452]: time="2025-04-30T00:12:35.302004723Z" level=info msg="TearDown network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\" successfully" Apr 30 00:12:35.302046 containerd[1452]: time="2025-04-30T00:12:35.302014323Z" level=info msg="StopPodSandbox for \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\" returns successfully" Apr 30 00:12:35.302860 containerd[1452]: time="2025-04-30T00:12:35.302253731Z" level=info msg="RemovePodSandbox for \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\"" Apr 30 00:12:35.302860 containerd[1452]: time="2025-04-30T00:12:35.302280052Z" level=info msg="Forcibly stopping sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\"" Apr 30 00:12:35.302860 containerd[1452]: time="2025-04-30T00:12:35.302339054Z" level=info msg="TearDown network for sandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\" successfully" Apr 30 00:12:35.304869 containerd[1452]: time="2025-04-30T00:12:35.304832179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.304920 containerd[1452]: time="2025-04-30T00:12:35.304881261Z" level=info msg="RemovePodSandbox \"1638b6da42ffd67ef504eaab65191b8742569f47249b51f9d41865970f7164a1\" returns successfully" Apr 30 00:12:35.305343 containerd[1452]: time="2025-04-30T00:12:35.305185431Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:35.305343 containerd[1452]: time="2025-04-30T00:12:35.305262954Z" level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:35.305343 containerd[1452]: time="2025-04-30T00:12:35.305273274Z" level=info msg="StopPodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:35.306379 containerd[1452]: time="2025-04-30T00:12:35.305692008Z" level=info msg="RemovePodSandbox for \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:35.306379 containerd[1452]: time="2025-04-30T00:12:35.305717169Z" level=info msg="Forcibly stopping sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\"" Apr 30 00:12:35.306379 containerd[1452]: time="2025-04-30T00:12:35.305784732Z" level=info msg="TearDown network for sandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" successfully" Apr 30 00:12:35.310745 containerd[1452]: time="2025-04-30T00:12:35.310713900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.310862 containerd[1452]: time="2025-04-30T00:12:35.310845504Z" level=info msg="RemovePodSandbox \"29bbaca8f54d0911378e3a83bd54535f2e977cc207050eb2422406635ac66727\" returns successfully" Apr 30 00:12:35.311617 containerd[1452]: time="2025-04-30T00:12:35.311433404Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 00:12:35.311617 containerd[1452]: time="2025-04-30T00:12:35.311514327Z" level=info msg="TearDown network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" successfully" Apr 30 00:12:35.311617 containerd[1452]: time="2025-04-30T00:12:35.311523727Z" level=info msg="StopPodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" returns successfully" Apr 30 00:12:35.313208 containerd[1452]: time="2025-04-30T00:12:35.312046705Z" level=info msg="RemovePodSandbox for \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 00:12:35.313208 containerd[1452]: time="2025-04-30T00:12:35.312074666Z" level=info msg="Forcibly stopping sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\"" Apr 30 00:12:35.313208 containerd[1452]: time="2025-04-30T00:12:35.312137388Z" level=info msg="TearDown network for sandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" successfully" Apr 30 00:12:35.315957 containerd[1452]: time="2025-04-30T00:12:35.315041127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.315957 containerd[1452]: time="2025-04-30T00:12:35.315924237Z" level=info msg="RemovePodSandbox \"a8a484902c1700c154f29682bc9113d47660fdb7d0bc40a0988ecce442498fb5\" returns successfully" Apr 30 00:12:35.317109 containerd[1452]: time="2025-04-30T00:12:35.316778747Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" Apr 30 00:12:35.317109 containerd[1452]: time="2025-04-30T00:12:35.316867110Z" level=info msg="TearDown network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" successfully" Apr 30 00:12:35.317109 containerd[1452]: time="2025-04-30T00:12:35.316876710Z" level=info msg="StopPodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" returns successfully" Apr 30 00:12:35.317219 containerd[1452]: time="2025-04-30T00:12:35.317170840Z" level=info msg="RemovePodSandbox for \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" Apr 30 00:12:35.317219 containerd[1452]: time="2025-04-30T00:12:35.317198841Z" level=info msg="Forcibly stopping sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\"" Apr 30 00:12:35.317286 containerd[1452]: time="2025-04-30T00:12:35.317259683Z" level=info msg="TearDown network for sandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" successfully" Apr 30 00:12:35.320367 containerd[1452]: time="2025-04-30T00:12:35.319976776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.320367 containerd[1452]: time="2025-04-30T00:12:35.320031298Z" level=info msg="RemovePodSandbox \"d358963d13c181b9b166e8194e19f516826ac482d422109d23187a4498c9bab6\" returns successfully" Apr 30 00:12:35.321459 containerd[1452]: time="2025-04-30T00:12:35.321060213Z" level=info msg="StopPodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\"" Apr 30 00:12:35.321459 containerd[1452]: time="2025-04-30T00:12:35.321157136Z" level=info msg="TearDown network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" successfully" Apr 30 00:12:35.321459 containerd[1452]: time="2025-04-30T00:12:35.321166536Z" level=info msg="StopPodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" returns successfully" Apr 30 00:12:35.324917 containerd[1452]: time="2025-04-30T00:12:35.321824599Z" level=info msg="RemovePodSandbox for \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\"" Apr 30 00:12:35.327830 containerd[1452]: time="2025-04-30T00:12:35.327719960Z" level=info msg="Forcibly stopping sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\"" Apr 30 00:12:35.327974 containerd[1452]: time="2025-04-30T00:12:35.327841644Z" level=info msg="TearDown network for sandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" successfully" Apr 30 00:12:35.339652 containerd[1452]: time="2025-04-30T00:12:35.339593005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.339776 containerd[1452]: time="2025-04-30T00:12:35.339674127Z" level=info msg="RemovePodSandbox \"4f42be6dc6dfb17b384ab7cfe75bf8866f12682ea8481f8d0f43e1a94ef4a170\" returns successfully" Apr 30 00:12:35.340257 containerd[1452]: time="2025-04-30T00:12:35.340216186Z" level=info msg="StopPodSandbox for \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\"" Apr 30 00:12:35.340342 containerd[1452]: time="2025-04-30T00:12:35.340322590Z" level=info msg="TearDown network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\" successfully" Apr 30 00:12:35.340342 containerd[1452]: time="2025-04-30T00:12:35.340336390Z" level=info msg="StopPodSandbox for \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\" returns successfully" Apr 30 00:12:35.340912 containerd[1452]: time="2025-04-30T00:12:35.340856608Z" level=info msg="RemovePodSandbox for \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\"" Apr 30 00:12:35.340912 containerd[1452]: time="2025-04-30T00:12:35.340889489Z" level=info msg="Forcibly stopping sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\"" Apr 30 00:12:35.341076 containerd[1452]: time="2025-04-30T00:12:35.340971412Z" level=info msg="TearDown network for sandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\" successfully" Apr 30 00:12:35.343911 containerd[1452]: time="2025-04-30T00:12:35.343734986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.343911 containerd[1452]: time="2025-04-30T00:12:35.343791988Z" level=info msg="RemovePodSandbox \"1d6bec5782b23a32bfe4ab83c6d7f5a08b845311c21b578119ac0b866133369f\" returns successfully" Apr 30 00:12:35.344535 containerd[1452]: time="2025-04-30T00:12:35.344374128Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:35.344535 containerd[1452]: time="2025-04-30T00:12:35.344459331Z" level=info msg="TearDown network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:35.344535 containerd[1452]: time="2025-04-30T00:12:35.344470531Z" level=info msg="StopPodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:35.344756 containerd[1452]: time="2025-04-30T00:12:35.344733500Z" level=info msg="RemovePodSandbox for \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:35.344803 containerd[1452]: time="2025-04-30T00:12:35.344762501Z" level=info msg="Forcibly stopping sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\"" Apr 30 00:12:35.344866 containerd[1452]: time="2025-04-30T00:12:35.344834783Z" level=info msg="TearDown network for sandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" successfully" Apr 30 00:12:35.347792 containerd[1452]: time="2025-04-30T00:12:35.347744803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.347879 containerd[1452]: time="2025-04-30T00:12:35.347839126Z" level=info msg="RemovePodSandbox \"7bc0565abd00c8d3f214592ed2ea0bbd3fcf62991e9581eef76783968dea637f\" returns successfully" Apr 30 00:12:35.348362 containerd[1452]: time="2025-04-30T00:12:35.348219859Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:35.348362 containerd[1452]: time="2025-04-30T00:12:35.348297822Z" level=info msg="TearDown network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" successfully" Apr 30 00:12:35.348362 containerd[1452]: time="2025-04-30T00:12:35.348307422Z" level=info msg="StopPodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" returns successfully" Apr 30 00:12:35.349687 containerd[1452]: time="2025-04-30T00:12:35.348561591Z" level=info msg="RemovePodSandbox for \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:35.349687 containerd[1452]: time="2025-04-30T00:12:35.348585791Z" level=info msg="Forcibly stopping sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\"" Apr 30 00:12:35.349687 containerd[1452]: time="2025-04-30T00:12:35.348637873Z" level=info msg="TearDown network for sandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" successfully" Apr 30 00:12:35.352058 containerd[1452]: time="2025-04-30T00:12:35.351888624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.352382 containerd[1452]: time="2025-04-30T00:12:35.352355920Z" level=info msg="RemovePodSandbox \"f57d82a960fc5be621eaf7e39dc063980047e91c55d6ee0b2a119790df8df81e\" returns successfully" Apr 30 00:12:35.352943 containerd[1452]: time="2025-04-30T00:12:35.352916859Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" Apr 30 00:12:35.353048 containerd[1452]: time="2025-04-30T00:12:35.353009182Z" level=info msg="TearDown network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" successfully" Apr 30 00:12:35.353048 containerd[1452]: time="2025-04-30T00:12:35.353023863Z" level=info msg="StopPodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" returns successfully" Apr 30 00:12:35.353335 containerd[1452]: time="2025-04-30T00:12:35.353291592Z" level=info msg="RemovePodSandbox for \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" Apr 30 00:12:35.353335 containerd[1452]: time="2025-04-30T00:12:35.353322233Z" level=info msg="Forcibly stopping sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\"" Apr 30 00:12:35.353448 containerd[1452]: time="2025-04-30T00:12:35.353386395Z" level=info msg="TearDown network for sandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" successfully" Apr 30 00:12:35.362785 containerd[1452]: time="2025-04-30T00:12:35.362732514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.362860 containerd[1452]: time="2025-04-30T00:12:35.362797956Z" level=info msg="RemovePodSandbox \"7939c348df13f00752bf51209266411ce0050b5a578870b6cbe722394a205f8b\" returns successfully" Apr 30 00:12:35.363388 containerd[1452]: time="2025-04-30T00:12:35.363217090Z" level=info msg="StopPodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\"" Apr 30 00:12:35.363388 containerd[1452]: time="2025-04-30T00:12:35.363320814Z" level=info msg="TearDown network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" successfully" Apr 30 00:12:35.363388 containerd[1452]: time="2025-04-30T00:12:35.363330014Z" level=info msg="StopPodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" returns successfully" Apr 30 00:12:35.363575 containerd[1452]: time="2025-04-30T00:12:35.363541981Z" level=info msg="RemovePodSandbox for \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\"" Apr 30 00:12:35.363575 containerd[1452]: time="2025-04-30T00:12:35.363569902Z" level=info msg="Forcibly stopping sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\"" Apr 30 00:12:35.363647 containerd[1452]: time="2025-04-30T00:12:35.363639425Z" level=info msg="TearDown network for sandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" successfully" Apr 30 00:12:35.366634 containerd[1452]: time="2025-04-30T00:12:35.366596606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.367100 containerd[1452]: time="2025-04-30T00:12:35.366658208Z" level=info msg="RemovePodSandbox \"e755483e6217fd28103dd12644a193d4943d6b15618e45def9205cadf9ac151d\" returns successfully" Apr 30 00:12:35.367370 containerd[1452]: time="2025-04-30T00:12:35.367270549Z" level=info msg="StopPodSandbox for \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\"" Apr 30 00:12:35.367525 containerd[1452]: time="2025-04-30T00:12:35.367437954Z" level=info msg="TearDown network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\" successfully" Apr 30 00:12:35.367525 containerd[1452]: time="2025-04-30T00:12:35.367455435Z" level=info msg="StopPodSandbox for \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\" returns successfully" Apr 30 00:12:35.367749 containerd[1452]: time="2025-04-30T00:12:35.367669562Z" level=info msg="RemovePodSandbox for \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\"" Apr 30 00:12:35.367749 containerd[1452]: time="2025-04-30T00:12:35.367712444Z" level=info msg="Forcibly stopping sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\"" Apr 30 00:12:35.367855 containerd[1452]: time="2025-04-30T00:12:35.367783486Z" level=info msg="TearDown network for sandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\" successfully" Apr 30 00:12:35.370278 containerd[1452]: time="2025-04-30T00:12:35.370241650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:12:35.370349 containerd[1452]: time="2025-04-30T00:12:35.370301132Z" level=info msg="RemovePodSandbox \"458da6af9d8a1e1f5abcb356f93a5b5cffbeb8e4f0853490015a2a43ce385f62\" returns successfully" Apr 30 00:12:35.370850 containerd[1452]: time="2025-04-30T00:12:35.370654384Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:35.370850 containerd[1452]: time="2025-04-30T00:12:35.370765268Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:35.370850 containerd[1452]: time="2025-04-30T00:12:35.370783628Z" level=info msg="StopPodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully" Apr 30 00:12:35.371713 containerd[1452]: time="2025-04-30T00:12:35.371196883Z" level=info msg="RemovePodSandbox for \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:35.371713 containerd[1452]: time="2025-04-30T00:12:35.371233324Z" level=info msg="Forcibly stopping sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\"" Apr 30 00:12:35.371713 containerd[1452]: time="2025-04-30T00:12:35.371304006Z" level=info msg="TearDown network for sandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" successfully" Apr 30 00:12:35.374067 containerd[1452]: time="2025-04-30T00:12:35.374022499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:12:35.374131 containerd[1452]: time="2025-04-30T00:12:35.374084821Z" level=info msg="RemovePodSandbox \"a148fb5eebaaaeda79c015b070962e0e4736eba13c631b58212ac466f4714705\" returns successfully"
Apr 30 00:12:35.374451 containerd[1452]: time="2025-04-30T00:12:35.374429713Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\""
Apr 30 00:12:35.374534 containerd[1452]: time="2025-04-30T00:12:35.374512876Z" level=info msg="TearDown network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" successfully"
Apr 30 00:12:35.374534 containerd[1452]: time="2025-04-30T00:12:35.374532396Z" level=info msg="StopPodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" returns successfully"
Apr 30 00:12:35.375723 containerd[1452]: time="2025-04-30T00:12:35.374901329Z" level=info msg="RemovePodSandbox for \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\""
Apr 30 00:12:35.375723 containerd[1452]: time="2025-04-30T00:12:35.374931290Z" level=info msg="Forcibly stopping sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\""
Apr 30 00:12:35.375723 containerd[1452]: time="2025-04-30T00:12:35.374995052Z" level=info msg="TearDown network for sandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" successfully"
Apr 30 00:12:35.377704 containerd[1452]: time="2025-04-30T00:12:35.377652663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.377776 containerd[1452]: time="2025-04-30T00:12:35.377729945Z" level=info msg="RemovePodSandbox \"872448b227a732d7bd69a8e90d0b2cf7d94e4432fbfe29ae5dc79d64e3d938ed\" returns successfully"
Apr 30 00:12:35.379002 containerd[1452]: time="2025-04-30T00:12:35.378973988Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\""
Apr 30 00:12:35.379081 containerd[1452]: time="2025-04-30T00:12:35.379065351Z" level=info msg="TearDown network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" successfully"
Apr 30 00:12:35.379111 containerd[1452]: time="2025-04-30T00:12:35.379079151Z" level=info msg="StopPodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" returns successfully"
Apr 30 00:12:35.379519 containerd[1452]: time="2025-04-30T00:12:35.379420923Z" level=info msg="RemovePodSandbox for \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\""
Apr 30 00:12:35.379519 containerd[1452]: time="2025-04-30T00:12:35.379503006Z" level=info msg="Forcibly stopping sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\""
Apr 30 00:12:35.379599 containerd[1452]: time="2025-04-30T00:12:35.379580728Z" level=info msg="TearDown network for sandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" successfully"
Apr 30 00:12:35.382565 containerd[1452]: time="2025-04-30T00:12:35.382526309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.382626 containerd[1452]: time="2025-04-30T00:12:35.382591031Z" level=info msg="RemovePodSandbox \"e82d037170125ea836443e8b1fbb65a53f250e8498781f6b5b51f485261d3b43\" returns successfully"
Apr 30 00:12:35.382975 containerd[1452]: time="2025-04-30T00:12:35.382937043Z" level=info msg="StopPodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\""
Apr 30 00:12:35.383114 containerd[1452]: time="2025-04-30T00:12:35.383080288Z" level=info msg="TearDown network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" successfully"
Apr 30 00:12:35.383114 containerd[1452]: time="2025-04-30T00:12:35.383098448Z" level=info msg="StopPodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" returns successfully"
Apr 30 00:12:35.384633 containerd[1452]: time="2025-04-30T00:12:35.383397179Z" level=info msg="RemovePodSandbox for \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\""
Apr 30 00:12:35.384633 containerd[1452]: time="2025-04-30T00:12:35.383430500Z" level=info msg="Forcibly stopping sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\""
Apr 30 00:12:35.384633 containerd[1452]: time="2025-04-30T00:12:35.383493302Z" level=info msg="TearDown network for sandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" successfully"
Apr 30 00:12:35.386823 containerd[1452]: time="2025-04-30T00:12:35.386786014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.386959 containerd[1452]: time="2025-04-30T00:12:35.386943100Z" level=info msg="RemovePodSandbox \"66c88b21b8cf7b4ac660d8b73c3f4d768584174e2e7dee1c624b40509d10fd9f\" returns successfully"
Apr 30 00:12:35.387388 containerd[1452]: time="2025-04-30T00:12:35.387363234Z" level=info msg="StopPodSandbox for \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\""
Apr 30 00:12:35.387488 containerd[1452]: time="2025-04-30T00:12:35.387447957Z" level=info msg="TearDown network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\" successfully"
Apr 30 00:12:35.387488 containerd[1452]: time="2025-04-30T00:12:35.387456917Z" level=info msg="StopPodSandbox for \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\" returns successfully"
Apr 30 00:12:35.388750 containerd[1452]: time="2025-04-30T00:12:35.387732446Z" level=info msg="RemovePodSandbox for \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\""
Apr 30 00:12:35.388750 containerd[1452]: time="2025-04-30T00:12:35.387760847Z" level=info msg="Forcibly stopping sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\""
Apr 30 00:12:35.388750 containerd[1452]: time="2025-04-30T00:12:35.387828650Z" level=info msg="TearDown network for sandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\" successfully"
Apr 30 00:12:35.390554 containerd[1452]: time="2025-04-30T00:12:35.390505981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.390618 containerd[1452]: time="2025-04-30T00:12:35.390567463Z" level=info msg="RemovePodSandbox \"a8c9c0e9c879677d9cf1d3bb7e4325894e476f319f388b1898a899de6365c6ba\" returns successfully"
Apr 30 00:12:35.391011 containerd[1452]: time="2025-04-30T00:12:35.390984117Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\""
Apr 30 00:12:35.391105 containerd[1452]: time="2025-04-30T00:12:35.391088881Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully"
Apr 30 00:12:35.391105 containerd[1452]: time="2025-04-30T00:12:35.391103601Z" level=info msg="StopPodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully"
Apr 30 00:12:35.391332 containerd[1452]: time="2025-04-30T00:12:35.391294008Z" level=info msg="RemovePodSandbox for \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\""
Apr 30 00:12:35.391332 containerd[1452]: time="2025-04-30T00:12:35.391319289Z" level=info msg="Forcibly stopping sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\""
Apr 30 00:12:35.391398 containerd[1452]: time="2025-04-30T00:12:35.391381491Z" level=info msg="TearDown network for sandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" successfully"
Apr 30 00:12:35.393913 containerd[1452]: time="2025-04-30T00:12:35.393873736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.393964 containerd[1452]: time="2025-04-30T00:12:35.393935578Z" level=info msg="RemovePodSandbox \"52e240433a75b1f6ef11afecc92e91dfbbaf8bb687071d9919ed8f4772e81a58\" returns successfully"
Apr 30 00:12:35.394282 containerd[1452]: time="2025-04-30T00:12:35.394251149Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\""
Apr 30 00:12:35.394422 containerd[1452]: time="2025-04-30T00:12:35.394349072Z" level=info msg="TearDown network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" successfully"
Apr 30 00:12:35.394422 containerd[1452]: time="2025-04-30T00:12:35.394411634Z" level=info msg="StopPodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" returns successfully"
Apr 30 00:12:35.394648 containerd[1452]: time="2025-04-30T00:12:35.394615561Z" level=info msg="RemovePodSandbox for \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\""
Apr 30 00:12:35.394697 containerd[1452]: time="2025-04-30T00:12:35.394649322Z" level=info msg="Forcibly stopping sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\""
Apr 30 00:12:35.394750 containerd[1452]: time="2025-04-30T00:12:35.394731045Z" level=info msg="TearDown network for sandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" successfully"
Apr 30 00:12:35.398304 containerd[1452]: time="2025-04-30T00:12:35.398234485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.398426 containerd[1452]: time="2025-04-30T00:12:35.398343848Z" level=info msg="RemovePodSandbox \"04c53eccb3fc4323dfcbde80eae227893fe2cce6b7eccdd022a123b965ffe239\" returns successfully"
Apr 30 00:12:35.398715 containerd[1452]: time="2025-04-30T00:12:35.398674340Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\""
Apr 30 00:12:35.398829 containerd[1452]: time="2025-04-30T00:12:35.398804384Z" level=info msg="TearDown network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" successfully"
Apr 30 00:12:35.398878 containerd[1452]: time="2025-04-30T00:12:35.398828985Z" level=info msg="StopPodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" returns successfully"
Apr 30 00:12:35.400984 containerd[1452]: time="2025-04-30T00:12:35.400605606Z" level=info msg="RemovePodSandbox for \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\""
Apr 30 00:12:35.401029 containerd[1452]: time="2025-04-30T00:12:35.400985939Z" level=info msg="Forcibly stopping sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\""
Apr 30 00:12:35.401186 containerd[1452]: time="2025-04-30T00:12:35.401162665Z" level=info msg="TearDown network for sandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" successfully"
Apr 30 00:12:35.407828 containerd[1452]: time="2025-04-30T00:12:35.407783090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.407956 containerd[1452]: time="2025-04-30T00:12:35.407852733Z" level=info msg="RemovePodSandbox \"6ed9149aa34a8d18b19a3ef0ecb51ad4cebb92552905d2bf75bdb04be65ada31\" returns successfully"
Apr 30 00:12:35.408285 containerd[1452]: time="2025-04-30T00:12:35.408243946Z" level=info msg="StopPodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\""
Apr 30 00:12:35.408377 containerd[1452]: time="2025-04-30T00:12:35.408341869Z" level=info msg="TearDown network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" successfully"
Apr 30 00:12:35.408377 containerd[1452]: time="2025-04-30T00:12:35.408356430Z" level=info msg="StopPodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" returns successfully"
Apr 30 00:12:35.408857 containerd[1452]: time="2025-04-30T00:12:35.408829286Z" level=info msg="RemovePodSandbox for \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\""
Apr 30 00:12:35.408857 containerd[1452]: time="2025-04-30T00:12:35.408855727Z" level=info msg="Forcibly stopping sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\""
Apr 30 00:12:35.408958 containerd[1452]: time="2025-04-30T00:12:35.408919209Z" level=info msg="TearDown network for sandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" successfully"
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.426202839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.426275521Z" level=info msg="RemovePodSandbox \"294ad2ebc34bc03486141b930346c9348ea21cd68a562b0f55b1a12e7babff1a\" returns successfully"
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.426726816Z" level=info msg="StopPodSandbox for \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\""
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.426832460Z" level=info msg="TearDown network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\" successfully"
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.426843460Z" level=info msg="StopPodSandbox for \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\" returns successfully"
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.427224353Z" level=info msg="RemovePodSandbox for \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\""
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.427248194Z" level=info msg="Forcibly stopping sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\""
Apr 30 00:12:35.427349 containerd[1452]: time="2025-04-30T00:12:35.427339037Z" level=info msg="TearDown network for sandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\" successfully"
Apr 30 00:12:35.430368 containerd[1452]: time="2025-04-30T00:12:35.430325659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.430434 containerd[1452]: time="2025-04-30T00:12:35.430388181Z" level=info msg="RemovePodSandbox \"602f8042f926d474eef87a974a52272224600abd1e39aef3b43e8f706a5d6cc2\" returns successfully"
Apr 30 00:12:35.430814 containerd[1452]: time="2025-04-30T00:12:35.430780275Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\""
Apr 30 00:12:35.430914 containerd[1452]: time="2025-04-30T00:12:35.430889398Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully"
Apr 30 00:12:35.430914 containerd[1452]: time="2025-04-30T00:12:35.430907839Z" level=info msg="StopPodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully"
Apr 30 00:12:35.431248 containerd[1452]: time="2025-04-30T00:12:35.431219730Z" level=info msg="RemovePodSandbox for \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\""
Apr 30 00:12:35.431274 containerd[1452]: time="2025-04-30T00:12:35.431250211Z" level=info msg="Forcibly stopping sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\""
Apr 30 00:12:35.431328 containerd[1452]: time="2025-04-30T00:12:35.431314973Z" level=info msg="TearDown network for sandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" successfully"
Apr 30 00:12:35.434248 containerd[1452]: time="2025-04-30T00:12:35.434196831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.434335 containerd[1452]: time="2025-04-30T00:12:35.434261953Z" level=info msg="RemovePodSandbox \"574d304fca9b6232c590c4d92b1a63a3fec96ebcca311dde12539db9b9c626cc\" returns successfully"
Apr 30 00:12:35.434755 containerd[1452]: time="2025-04-30T00:12:35.434720409Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\""
Apr 30 00:12:35.434859 containerd[1452]: time="2025-04-30T00:12:35.434835933Z" level=info msg="TearDown network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" successfully"
Apr 30 00:12:35.434859 containerd[1452]: time="2025-04-30T00:12:35.434853854Z" level=info msg="StopPodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" returns successfully"
Apr 30 00:12:35.435223 containerd[1452]: time="2025-04-30T00:12:35.435177585Z" level=info msg="RemovePodSandbox for \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\""
Apr 30 00:12:35.435254 containerd[1452]: time="2025-04-30T00:12:35.435225266Z" level=info msg="Forcibly stopping sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\""
Apr 30 00:12:35.435376 containerd[1452]: time="2025-04-30T00:12:35.435294589Z" level=info msg="TearDown network for sandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" successfully"
Apr 30 00:12:35.438949 containerd[1452]: time="2025-04-30T00:12:35.438904272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.439008 containerd[1452]: time="2025-04-30T00:12:35.438972034Z" level=info msg="RemovePodSandbox \"800faf38c496c56537522e967afc5e75b89f8ffa39e36cce11dd51e5808bcdf2\" returns successfully"
Apr 30 00:12:35.439478 containerd[1452]: time="2025-04-30T00:12:35.439443930Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\""
Apr 30 00:12:35.439575 containerd[1452]: time="2025-04-30T00:12:35.439553894Z" level=info msg="TearDown network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" successfully"
Apr 30 00:12:35.439575 containerd[1452]: time="2025-04-30T00:12:35.439568494Z" level=info msg="StopPodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" returns successfully"
Apr 30 00:12:35.439885 containerd[1452]: time="2025-04-30T00:12:35.439857224Z" level=info msg="RemovePodSandbox for \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\""
Apr 30 00:12:35.447287 containerd[1452]: time="2025-04-30T00:12:35.447220675Z" level=info msg="Forcibly stopping sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\""
Apr 30 00:12:35.447364 containerd[1452]: time="2025-04-30T00:12:35.447348120Z" level=info msg="TearDown network for sandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" successfully"
Apr 30 00:12:35.450185 containerd[1452]: time="2025-04-30T00:12:35.450143975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.450243 containerd[1452]: time="2025-04-30T00:12:35.450202417Z" level=info msg="RemovePodSandbox \"fc49ac776d42bea922f885e3e4fea36f8e1bfe797dc010b419a212888befd3d9\" returns successfully"
Apr 30 00:12:35.450615 containerd[1452]: time="2025-04-30T00:12:35.450587950Z" level=info msg="StopPodSandbox for \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\""
Apr 30 00:12:35.450738 containerd[1452]: time="2025-04-30T00:12:35.450713874Z" level=info msg="TearDown network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" successfully"
Apr 30 00:12:35.450738 containerd[1452]: time="2025-04-30T00:12:35.450729555Z" level=info msg="StopPodSandbox for \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" returns successfully"
Apr 30 00:12:35.450986 containerd[1452]: time="2025-04-30T00:12:35.450959083Z" level=info msg="RemovePodSandbox for \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\""
Apr 30 00:12:35.451045 containerd[1452]: time="2025-04-30T00:12:35.450986164Z" level=info msg="Forcibly stopping sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\""
Apr 30 00:12:35.451073 containerd[1452]: time="2025-04-30T00:12:35.451045686Z" level=info msg="TearDown network for sandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" successfully"
Apr 30 00:12:35.453628 containerd[1452]: time="2025-04-30T00:12:35.453574132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.453694 containerd[1452]: time="2025-04-30T00:12:35.453633014Z" level=info msg="RemovePodSandbox \"1d54d1de467feabc868ad4eb2a100365bc375a4ec9dc0ff0931cea865f3c8ea9\" returns successfully"
Apr 30 00:12:35.454029 containerd[1452]: time="2025-04-30T00:12:35.454000787Z" level=info msg="StopPodSandbox for \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\""
Apr 30 00:12:35.454289 containerd[1452]: time="2025-04-30T00:12:35.454188953Z" level=info msg="TearDown network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\" successfully"
Apr 30 00:12:35.454289 containerd[1452]: time="2025-04-30T00:12:35.454205274Z" level=info msg="StopPodSandbox for \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\" returns successfully"
Apr 30 00:12:35.454473 containerd[1452]: time="2025-04-30T00:12:35.454448522Z" level=info msg="RemovePodSandbox for \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\""
Apr 30 00:12:35.454506 containerd[1452]: time="2025-04-30T00:12:35.454479723Z" level=info msg="Forcibly stopping sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\""
Apr 30 00:12:35.454571 containerd[1452]: time="2025-04-30T00:12:35.454557406Z" level=info msg="TearDown network for sandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\" successfully"
Apr 30 00:12:35.457451 containerd[1452]: time="2025-04-30T00:12:35.457401623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:12:35.457978 containerd[1452]: time="2025-04-30T00:12:35.457481345Z" level=info msg="RemovePodSandbox \"d8d9b52ab3d8c8d01b00e27861a8020660e203a7fc2b656f031c0aec39c2781d\" returns successfully"
Apr 30 00:12:40.038227 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:38724.service - OpenSSH per-connection server daemon (10.0.0.1:38724).
Apr 30 00:12:40.091252 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 38724 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:12:40.092676 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:12:40.098775 systemd-logind[1431]: New session 20 of user core.
Apr 30 00:12:40.108944 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:12:40.254719 sshd[5840]: Connection closed by 10.0.0.1 port 38724
Apr 30 00:12:40.255014 sshd-session[5838]: pam_unix(sshd:session): session closed for user core
Apr 30 00:12:40.257768 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:38724.service: Deactivated successfully.
Apr 30 00:12:40.260053 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:12:40.261892 systemd-logind[1431]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:12:40.262822 systemd-logind[1431]: Removed session 20.
Apr 30 00:12:45.267247 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:49276.service - OpenSSH per-connection server daemon (10.0.0.1:49276).
Apr 30 00:12:45.338213 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 49276 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:12:45.339224 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:12:45.343271 systemd-logind[1431]: New session 21 of user core.
Apr 30 00:12:45.355887 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:12:45.495944 sshd[5860]: Connection closed by 10.0.0.1 port 49276
Apr 30 00:12:45.495755 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
Apr 30 00:12:45.499956 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:49276.service: Deactivated successfully.
Apr 30 00:12:45.502784 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:12:45.504603 systemd-logind[1431]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:12:45.506159 systemd-logind[1431]: Removed session 21.
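
Editor's note: the long run of containerd entries above is the CRI pod-sandbox cleanup path: for each sandbox the caller issues StopPodSandbox (which tears down the sandbox network), then RemovePodSandbox, and the "Failed to get podSandbox status ... not found" warning appears to fire when containerd tries to emit a container event for metadata that has already been deleted. As an illustration only (not code from this host), the following minimal Go sketch drives the same two CRI RPCs against containerd's default CRI socket; the socket path, the filter on not-ready sandboxes, and the decision to remove everything that matches are assumptions made for the sketch.

// sandbox_gc.go: illustrative CRI client mirroring the StopPodSandbox ->
// RemovePodSandbox sequence seen in the log above. Assumptions: containerd's
// CRI endpoint at /run/containerd/containerd.sock, and that every NOTREADY
// sandbox should be force-removed.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI endpoint over a unix socket (adjust the path if needed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	client := runtime.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// List sandboxes that are no longer ready (exited pods).
	resp, err := client.ListPodSandbox(ctx, &runtime.ListPodSandboxRequest{
		Filter: &runtime.PodSandboxFilter{
			State: &runtime.PodSandboxStateValue{State: runtime.PodSandboxState_SANDBOX_NOTREADY},
		},
	})
	if err != nil {
		log.Fatalf("ListPodSandbox: %v", err)
	}

	for _, sb := range resp.Items {
		// Stop first (network teardown), then remove, matching the order
		// of the "StopPodSandbox" / "TearDown network" / "RemovePodSandbox"
		// messages in the journal.
		if _, err := client.StopPodSandbox(ctx, &runtime.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			log.Printf("StopPodSandbox %s: %v", sb.Id, err)
			continue
		}
		if _, err := client.RemovePodSandbox(ctx, &runtime.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			log.Printf("RemovePodSandbox %s: %v", sb.Id, err)
		}
	}
}

In a running cluster the kubelet performs this garbage collection itself; crictl pods, crictl stopp <pod-id>, and crictl rmp <pod-id> exercise the same RPCs by hand when debugging.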
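
Editor's note: the unit names of the form sshd@19-10.0.0.122:22-10.0.0.1:38724.service are consistent with socket-activated, per-connection sshd: systemd accepts each incoming TCP connection on an sshd.socket with Accept=yes and spawns one templated sshd@.service instance per connection (the instance name encodes a sequence number plus the local and remote address:port), while systemd-logind tracks the resulting session-NN.scope units. The pair of units below is a representative sketch of that pattern, not the units taken from this host; paths and descriptions are assumptions.

# sshd.socket (representative): accept each connection and spawn a per-connection unit
[Unit]
Description=OpenSSH server socket

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

# sshd@.service (representative): one instance per accepted connection, run in inetd mode
[Unit]
Description=OpenSSH per-connection server daemon

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket

When the client disconnects, the per-connection service and the corresponding session scope are deactivated, which matches the "Deactivated successfully" and "Removed session" lines above.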