Jan 29 11:13:29.899606 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:13:29.899627 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:13:29.899637 kernel: KASLR enabled
Jan 29 11:13:29.899643 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:13:29.899648 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:13:29.899654 kernel: random: crng init done
Jan 29 11:13:29.899661 kernel: secureboot: Secure boot disabled
Jan 29 11:13:29.899666 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:13:29.899672 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:13:29.899679 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:13:29.899685 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899691 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899697 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899703 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899710 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899717 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899724 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899730 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899736 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:29.899750 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:13:29.899758 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:13:29.899764 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:13:29.899770 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:13:29.899776 kernel: Zone ranges:
Jan 29 11:13:29.899794 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:13:29.899802 kernel: DMA32 empty
Jan 29 11:13:29.899808 kernel: Normal empty
Jan 29 11:13:29.899814 kernel: Movable zone start for each node
Jan 29 11:13:29.899820 kernel: Early memory node ranges
Jan 29 11:13:29.899826 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:13:29.899832 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:13:29.899839 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:13:29.899845 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:13:29.899851 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:13:29.899857 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:13:29.899863 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:13:29.899869 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:13:29.899877 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:13:29.899883 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:13:29.899889 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:13:29.899898 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:13:29.899904 kernel: psci: Trusted OS migration not required
Jan 29 11:13:29.899911 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:13:29.899919 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:13:29.899925 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:13:29.899932 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:13:29.899939 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:13:29.899945 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:13:29.899952 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:13:29.899958 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:13:29.899965 kernel: CPU features: detected: Spectre-v4
Jan 29 11:13:29.899971 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:13:29.899978 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:13:29.899986 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:13:29.899992 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:13:29.899999 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:13:29.900005 kernel: alternatives: applying boot alternatives
Jan 29 11:13:29.900013 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:13:29.900020 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:13:29.900026 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:13:29.900033 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:13:29.900039 kernel: Fallback order for Node 0: 0
Jan 29 11:13:29.900046 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:13:29.900052 kernel: Policy zone: DMA
Jan 29 11:13:29.900060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:13:29.900067 kernel: software IO TLB: area num 4.
Jan 29 11:13:29.900073 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:13:29.900080 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 29 11:13:29.900087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:13:29.900093 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:13:29.900100 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:13:29.900107 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:13:29.900114 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:13:29.900120 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:13:29.900127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:13:29.900134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:13:29.900141 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:13:29.900148 kernel: GICv3: 256 SPIs implemented
Jan 29 11:13:29.900154 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:13:29.900161 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:13:29.900168 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:13:29.900174 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:13:29.900181 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:13:29.900187 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:13:29.900194 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:13:29.900201 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:13:29.900207 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:13:29.900215 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:13:29.900222 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:13:29.900228 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:13:29.900235 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:13:29.900242 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:13:29.900248 kernel: arm-pv: using stolen time PV
Jan 29 11:13:29.900255 kernel: Console: colour dummy device 80x25
Jan 29 11:13:29.900262 kernel: ACPI: Core revision 20230628
Jan 29 11:13:29.900269 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:13:29.900275 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:13:29.900283 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:13:29.900290 kernel: landlock: Up and running.
Jan 29 11:13:29.900296 kernel: SELinux: Initializing.
Jan 29 11:13:29.900303 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:13:29.900310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:13:29.900317 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:13:29.900323 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:13:29.900330 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:13:29.900337 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:13:29.900345 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:13:29.900351 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:13:29.900358 kernel: Remapping and enabling EFI services.
Jan 29 11:13:29.900365 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:13:29.900371 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:13:29.900378 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:13:29.900385 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:13:29.900392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:13:29.900398 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:13:29.900405 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:13:29.900413 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:13:29.900420 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:13:29.900431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:13:29.900439 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:13:29.900447 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:13:29.900454 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:13:29.900461 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:13:29.900468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:13:29.900475 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:13:29.900483 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:13:29.900489 kernel: SMP: Total of 4 processors activated.
Jan 29 11:13:29.900496 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:13:29.900504 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:13:29.900511 kernel: CPU features: detected: Common not Private translations
Jan 29 11:13:29.900518 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:13:29.900524 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:13:29.900531 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:13:29.900550 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:13:29.900563 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:13:29.900571 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:13:29.900578 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:13:29.900585 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:13:29.900592 kernel: alternatives: applying system-wide alternatives
Jan 29 11:13:29.900614 kernel: devtmpfs: initialized
Jan 29 11:13:29.900622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:13:29.900630 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:13:29.900638 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:13:29.900646 kernel: SMBIOS 3.0.0 present.
Jan 29 11:13:29.900653 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:13:29.900660 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:13:29.900667 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:13:29.900675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:13:29.900682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:13:29.900690 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:13:29.900697 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 29 11:13:29.900705 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:13:29.900712 kernel: cpuidle: using governor menu
Jan 29 11:13:29.900719 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:13:29.900727 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:13:29.900734 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:13:29.900741 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:13:29.900752 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:13:29.900759 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:13:29.900766 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:13:29.900774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:13:29.900781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:13:29.900788 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:13:29.900796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:13:29.900803 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:13:29.900810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:13:29.900817 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:13:29.900824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:13:29.900831 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:13:29.900840 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:13:29.900847 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:13:29.900854 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:13:29.900861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:13:29.900867 kernel: ACPI: Interpreter enabled
Jan 29 11:13:29.900874 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:13:29.900881 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:13:29.900888 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:13:29.900896 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:13:29.900904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:13:29.901027 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:13:29.901098 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:13:29.901163 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:13:29.901225 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:13:29.901286 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:13:29.901295 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:13:29.901304 kernel: PCI host bridge to bus 0000:00
Jan 29 11:13:29.901377 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:13:29.901434 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:13:29.901492 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:13:29.901581 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:13:29.901662 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:13:29.901740 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:13:29.901818 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:13:29.901898 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:13:29.901962 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:13:29.902026 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:13:29.902090 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:13:29.902153 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:13:29.902210 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:13:29.902269 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:13:29.902324 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:13:29.902334 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:13:29.902341 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:13:29.902348 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:13:29.902355 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:13:29.902362 kernel: iommu: Default domain type: Translated
Jan 29 11:13:29.902369 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:13:29.902378 kernel: efivars: Registered efivars operations
Jan 29 11:13:29.902386 kernel: vgaarb: loaded
Jan 29 11:13:29.902393 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:13:29.902400 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:13:29.902407 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:13:29.902414 kernel: pnp: PnP ACPI init
Jan 29 11:13:29.902482 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:13:29.902493 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:13:29.902501 kernel: NET: Registered PF_INET protocol family
Jan 29 11:13:29.902509 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:13:29.902516 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:13:29.902523 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:13:29.902531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:13:29.902609 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:13:29.902617 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:13:29.902624 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:13:29.902631 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:13:29.902642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:13:29.902649 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:13:29.902656 kernel: kvm [1]: HYP mode not available
Jan 29 11:13:29.902663 kernel: Initialise system trusted keyrings
Jan 29 11:13:29.902670 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:13:29.902677 kernel: Key type asymmetric registered
Jan 29 11:13:29.902684 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:13:29.902702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:13:29.902709 kernel: io scheduler mq-deadline registered
Jan 29 11:13:29.902717 kernel: io scheduler kyber registered
Jan 29 11:13:29.902724 kernel: io scheduler bfq registered
Jan 29 11:13:29.902732 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:13:29.902739 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:13:29.902752 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:13:29.902831 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:13:29.902841 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:13:29.902848 kernel: thunder_xcv, ver 1.0
Jan 29 11:13:29.902855 kernel: thunder_bgx, ver 1.0
Jan 29 11:13:29.902864 kernel: nicpf, ver 1.0
Jan 29 11:13:29.902871 kernel: nicvf, ver 1.0
Jan 29 11:13:29.902941 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:13:29.903002 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:13:29 UTC (1738149209)
Jan 29 11:13:29.903012 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:13:29.903020 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:13:29.903027 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:13:29.903034 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:13:29.903043 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:13:29.903050 kernel: Segment Routing with IPv6
Jan 29 11:13:29.903057 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:13:29.903064 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:13:29.903071 kernel: Key type dns_resolver registered
Jan 29 11:13:29.903078 kernel: registered taskstats version 1
Jan 29 11:13:29.903085 kernel: Loading compiled-in X.509 certificates
Jan 29 11:13:29.903093 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:13:29.903100 kernel: Key type .fscrypt registered
Jan 29 11:13:29.903108 kernel: Key type fscrypt-provisioning registered
Jan 29 11:13:29.903115 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:13:29.903122 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:13:29.903129 kernel: ima: No architecture policies found
Jan 29 11:13:29.903136 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:13:29.903143 kernel: clk: Disabling unused clocks
Jan 29 11:13:29.903150 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:13:29.903157 kernel: Run /init as init process
Jan 29 11:13:29.903164 kernel: with arguments:
Jan 29 11:13:29.903172 kernel: /init
Jan 29 11:13:29.903179 kernel: with environment:
Jan 29 11:13:29.903186 kernel: HOME=/
Jan 29 11:13:29.903193 kernel: TERM=linux
Jan 29 11:13:29.903200 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:13:29.903208 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:13:29.903217 systemd[1]: Detected virtualization kvm.
Jan 29 11:13:29.903225 systemd[1]: Detected architecture arm64.
Jan 29 11:13:29.903233 systemd[1]: Running in initrd.
Jan 29 11:13:29.903241 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:13:29.903248 systemd[1]: Hostname set to .
Jan 29 11:13:29.903255 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:13:29.903263 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:13:29.903275 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:13:29.903283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:13:29.903291 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:13:29.903300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:13:29.903307 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:13:29.903315 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:13:29.903324 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:13:29.903332 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:13:29.903339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:13:29.903348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:13:29.903356 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:13:29.903363 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:13:29.903371 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:13:29.903378 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:13:29.903388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:13:29.903396 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:13:29.903404 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:13:29.903411 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:13:29.903421 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:13:29.903428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:13:29.903436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:13:29.903443 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:13:29.903451 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:13:29.903458 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:13:29.903466 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:13:29.903473 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:13:29.903481 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:13:29.903490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:13:29.903497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:29.903505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:13:29.903512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:13:29.903520 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:13:29.903528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:13:29.903567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:13:29.903576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:29.903599 systemd-journald[238]: Collecting audit messages is disabled.
Jan 29 11:13:29.903620 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:13:29.903628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:13:29.903636 systemd-journald[238]: Journal started
Jan 29 11:13:29.903658 systemd-journald[238]: Runtime Journal (/run/log/journal/c245acb4c08941b29c1a302e074c21b5) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:13:29.895361 systemd-modules-load[239]: Inserted module 'overlay'
Jan 29 11:13:29.905040 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:13:29.908784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:13:29.908636 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:13:29.909598 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:13:29.912626 kernel: Bridge firewalling registered
Jan 29 11:13:29.912565 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 29 11:13:29.913567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:13:29.926676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:13:29.927701 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:29.929159 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:13:29.932470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:13:29.935049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:13:29.937703 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:13:29.946178 dracut-cmdline[275]: dracut-dracut-053
Jan 29 11:13:29.948632 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:13:29.964370 systemd-resolved[278]: Positive Trust Anchors:
Jan 29 11:13:29.964444 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:13:29.964476 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:13:29.969173 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 29 11:13:29.970320 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:13:29.971399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:13:30.018566 kernel: SCSI subsystem initialized
Jan 29 11:13:30.022555 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:13:30.030580 kernel: iscsi: registered transport (tcp)
Jan 29 11:13:30.042741 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:13:30.042766 kernel: QLogic iSCSI HBA Driver
Jan 29 11:13:30.084930 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:13:30.100695 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:13:30.115714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:13:30.115770 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:13:30.118569 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:13:30.165566 kernel: raid6: neonx8 gen() 15795 MB/s
Jan 29 11:13:30.181554 kernel: raid6: neonx4 gen() 15659 MB/s
Jan 29 11:13:30.198548 kernel: raid6: neonx2 gen() 13233 MB/s
Jan 29 11:13:30.215555 kernel: raid6: neonx1 gen() 10488 MB/s
Jan 29 11:13:30.232547 kernel: raid6: int64x8 gen() 6975 MB/s
Jan 29 11:13:30.249551 kernel: raid6: int64x4 gen() 7347 MB/s
Jan 29 11:13:30.266548 kernel: raid6: int64x2 gen() 6130 MB/s
Jan 29 11:13:30.283550 kernel: raid6: int64x1 gen() 5061 MB/s
Jan 29 11:13:30.283565 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Jan 29 11:13:30.300554 kernel: raid6: .... xor() 11935 MB/s, rmw enabled
Jan 29 11:13:30.300568 kernel: raid6: using neon recovery algorithm
Jan 29 11:13:30.305551 kernel: xor: measuring software checksum speed
Jan 29 11:13:30.305567 kernel: 8regs : 19759 MB/sec
Jan 29 11:13:30.306945 kernel: 32regs : 18573 MB/sec
Jan 29 11:13:30.306957 kernel: arm64_neon : 27034 MB/sec
Jan 29 11:13:30.306973 kernel: xor: using function: arm64_neon (27034 MB/sec)
Jan 29 11:13:30.356565 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:13:30.367169 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:13:30.378711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:13:30.389682 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jan 29 11:13:30.392937 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:13:30.395295 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:13:30.410082 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jan 29 11:13:30.437779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:13:30.450771 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:13:30.491371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:13:30.498682 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:13:30.510913 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:13:30.512167 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:13:30.514622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:13:30.516525 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:13:30.525962 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:13:30.539024 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:13:30.552031 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:13:30.552132 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:13:30.552150 kernel: GPT:9289727 != 19775487
Jan 29 11:13:30.552159 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:13:30.552169 kernel: GPT:9289727 != 19775487
Jan 29 11:13:30.552179 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:13:30.552188 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:30.540048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:13:30.550428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:13:30.550555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:30.553607 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:13:30.554347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:13:30.554472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:30.556760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:30.563757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:30.575370 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (525)
Jan 29 11:13:30.575410 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (510)
Jan 29 11:13:30.577739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:30.584897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:13:30.589036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:13:30.592500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:13:30.593394 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:13:30.598774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:13:30.605718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:13:30.607208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:13:30.611539 disk-uuid[553]: Primary Header is updated.
Jan 29 11:13:30.611539 disk-uuid[553]: Secondary Entries is updated.
Jan 29 11:13:30.611539 disk-uuid[553]: Secondary Header is updated.
Jan 29 11:13:30.614547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:30.636265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:31.629574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:31.629742 disk-uuid[554]: The operation has completed successfully.
Jan 29 11:13:31.650114 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:13:31.650216 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:13:31.672714 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:13:31.675615 sh[571]: Success
Jan 29 11:13:31.686584 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:13:31.714226 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:13:31.725884 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:13:31.727260 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:13:31.736690 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:13:31.736761 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:13:31.739902 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:13:31.739917 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:13:31.739927 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:13:31.744366 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:13:31.745260 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:13:31.761754 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:13:31.763170 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:13:31.770112 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:13:31.770150 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:13:31.770161 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:13:31.772562 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:13:31.779786 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:13:31.781230 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:13:31.786938 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:13:31.794764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:13:31.856721 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:13:31.868700 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:13:31.887481 ignition[661]: Ignition 2.20.0
Jan 29 11:13:31.887492 ignition[661]: Stage: fetch-offline
Jan 29 11:13:31.890437 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:31.890447 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:31.890702 ignition[661]: parsed url from cmdline: ""
Jan 29 11:13:31.890706 ignition[661]: no config URL provided
Jan 29 11:13:31.890711 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:13:31.890718 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:13:31.890753 ignition[661]: op(1): [started] loading QEMU firmware config module
Jan 29 11:13:31.890758 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:13:31.898269 systemd-networkd[764]: lo: Link UP
Jan 29 11:13:31.898281 systemd-networkd[764]: lo: Gained carrier
Jan 29 11:13:31.899038 systemd-networkd[764]: Enumeration completed
Jan 29 11:13:31.899452 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:13:31.899454 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:13:31.900263 systemd-networkd[764]: eth0: Link UP
Jan 29 11:13:31.900266 systemd-networkd[764]: eth0: Gained carrier
Jan 29 11:13:31.904349 ignition[661]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:13:31.900273 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:13:31.901921 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:13:31.903007 systemd[1]: Reached target network.target - Network.
Jan 29 11:13:31.912591 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:13:31.945660 ignition[661]: parsing config with SHA512: 59fb61ac50ff006c32ed2d9d2e9116c7dba43c56125327e0a8a84befc220081c1ce54559fc01e870c251f0737b5f6985e0608f931c0b79ae9b59c14aa610e1ef
Jan 29 11:13:31.951107 unknown[661]: fetched base config from "system"
Jan 29 11:13:31.951128 unknown[661]: fetched user config from "qemu"
Jan 29 11:13:31.951702 ignition[661]: fetch-offline: fetch-offline passed
Jan 29 11:13:31.951815 ignition[661]: Ignition finished successfully
Jan 29 11:13:31.954782 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:13:31.955765 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:13:31.966698 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:13:31.976781 ignition[771]: Ignition 2.20.0
Jan 29 11:13:31.976791 ignition[771]: Stage: kargs
Jan 29 11:13:31.976959 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:31.976969 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:31.977815 ignition[771]: kargs: kargs passed
Jan 29 11:13:31.977859 ignition[771]: Ignition finished successfully
Jan 29 11:13:31.980399 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:13:31.993810 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:13:32.003099 ignition[780]: Ignition 2.20.0
Jan 29 11:13:32.003110 ignition[780]: Stage: disks
Jan 29 11:13:32.003268 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:32.003276 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:32.004164 ignition[780]: disks: disks passed
Jan 29 11:13:32.005549 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:13:32.004208 ignition[780]: Ignition finished successfully
Jan 29 11:13:32.006460 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:13:32.007401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:13:32.008842 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:13:32.010025 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:13:32.011404 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:13:32.025711 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:13:32.035008 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:13:32.038554 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:13:32.053688 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:13:32.098425 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:13:32.099617 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:13:32.099560 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:13:32.111613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:13:32.113443 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:13:32.114262 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:13:32.114298 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:13:32.114319 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:13:32.119213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:13:32.120739 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:13:32.124951 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Jan 29 11:13:32.124979 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:13:32.124990 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:13:32.126550 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:13:32.128731 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:13:32.129655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:13:32.156263 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:13:32.160393 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:13:32.164367 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:13:32.167839 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:13:32.238597 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:13:32.249674 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:13:32.251044 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:13:32.255550 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:13:32.269301 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:13:32.273181 ignition[912]: INFO : Ignition 2.20.0
Jan 29 11:13:32.273181 ignition[912]: INFO : Stage: mount
Jan 29 11:13:32.274381 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:32.274381 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:32.274381 ignition[912]: INFO : mount: mount passed
Jan 29 11:13:32.274381 ignition[912]: INFO : Ignition finished successfully
Jan 29 11:13:32.277080 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:13:32.283696 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:13:32.736186 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:13:32.745770 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:13:32.750552 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Jan 29 11:13:32.752686 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:13:32.752701 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:13:32.752710 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:13:32.754560 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:13:32.755628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:13:32.770961 ignition[943]: INFO : Ignition 2.20.0
Jan 29 11:13:32.770961 ignition[943]: INFO : Stage: files
Jan 29 11:13:32.772193 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:32.772193 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:32.772193 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:13:32.774766 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:13:32.774766 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:13:32.774766 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:13:32.774766 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:13:32.778578 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:13:32.778578 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:13:32.778578 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:13:32.774900 unknown[943]: wrote ssh authorized keys file for user: core
Jan 29 11:13:32.828039 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:13:33.001840 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:13:33.001840 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:13:33.004615 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 11:13:33.263795 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:13:33.511897 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:13:33.511897 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 11:13:33.514567 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:13:33.536042 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:13:33.539913 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:13:33.541673 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:13:33.541673 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:13:33.541673 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:13:33.541673 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:13:33.541673 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:13:33.541673 ignition[943]: INFO : files: files passed
Jan 29 11:13:33.541673 ignition[943]: INFO : Ignition finished successfully
Jan 29 11:13:33.544084 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:13:33.555708 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:13:33.557737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:13:33.559395 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:13:33.560211 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:13:33.564160 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:13:33.566107 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:13:33.566107 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:13:33.569663 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:13:33.568682 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:13:33.571884 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:13:33.582739 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:13:33.602453 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:13:33.602593 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:13:33.604222 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:13:33.605496 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:13:33.606833 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:13:33.607564 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:13:33.621985 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:13:33.624100 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:13:33.634774 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:13:33.635692 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:13:33.637181 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:13:33.638439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:13:33.638561 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:13:33.640546 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:13:33.642028 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:13:33.643223 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:13:33.644447 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:13:33.645866 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:13:33.647430 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:13:33.648777 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:13:33.650185 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:13:33.651587 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:13:33.652873 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:13:33.653975 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:13:33.654088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:13:33.655871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:13:33.657225 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:13:33.658606 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:13:33.660252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:13:33.661260 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:13:33.661371 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:13:33.663471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:13:33.663600 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:13:33.665082 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:13:33.666263 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:13:33.669596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:13:33.670548 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:13:33.672224 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:13:33.673363 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:13:33.673451 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:13:33.674564 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:13:33.674648 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:13:33.675762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:13:33.675866 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:13:33.677251 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:13:33.677347 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:13:33.689700 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:13:33.691696 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:13:33.692322 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:13:33.692435 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:13:33.693799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:13:33.693894 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:13:33.698794 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:13:33.698902 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:13:33.702668 systemd-networkd[764]: eth0: Gained IPv6LL
Jan 29 11:13:33.704817 ignition[999]: INFO : Ignition 2.20.0
Jan 29 11:13:33.704817 ignition[999]: INFO : Stage: umount
Jan 29 11:13:33.706264 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:33.706264 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:33.706264 ignition[999]: INFO : umount: umount passed
Jan 29 11:13:33.706264 ignition[999]: INFO : Ignition finished successfully
Jan 29 11:13:33.705069 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:13:33.707369 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:13:33.707464 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:13:33.708623 systemd[1]: Stopped target network.target - Network.
Jan 29 11:13:33.710004 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:13:33.710061 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:13:33.711221 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:13:33.711263 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:13:33.712417 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:13:33.712453 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:13:33.713733 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:13:33.713783 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:13:33.715209 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:13:33.716650 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:13:33.721123 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jan 29 11:13:33.723054 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:13:33.723173 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:13:33.724907 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:13:33.725091 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:13:33.727192 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:13:33.727241 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:13:33.736684 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:13:33.737365 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:13:33.737425 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:13:33.738913 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:13:33.738951 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:13:33.740304 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:13:33.740345 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:13:33.742021 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:13:33.742065 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:13:33.743487 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:13:33.753830 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:13:33.753962 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:13:33.758288 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:13:33.758427 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:13:33.760076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:13:33.760113 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:13:33.761429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:13:33.761460 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:13:33.762335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:13:33.762379 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:13:33.764374 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:13:33.764421 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:13:33.766545 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:13:33.766592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:33.776742 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:13:33.777560 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:13:33.777612 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:13:33.779194 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:13:33.779230 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:13:33.780669 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:13:33.780708 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:13:33.782279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:13:33.782318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:33.784000 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:13:33.784083 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:13:33.786424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:13:33.786512 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:13:33.788429 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:13:33.789954 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:13:33.790011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:13:33.792200 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:13:33.814201 systemd[1]: Switching root.
Jan 29 11:13:33.835419 systemd-journald[238]: Journal stopped
Jan 29 11:13:34.513757 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:13:34.513813 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:13:34.513826 kernel: SELinux: policy capability open_perms=1
Jan 29 11:13:34.513835 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:13:34.513848 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:13:34.513857 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:13:34.513866 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:13:34.513876 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:13:34.513885 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:13:34.513898 kernel: audit: type=1403 audit(1738149213.996:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:13:34.513908 systemd[1]: Successfully loaded SELinux policy in 35.874ms.
Jan 29 11:13:34.513928 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.406ms.
Jan 29 11:13:34.513940 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:13:34.513950 systemd[1]: Detected virtualization kvm.
Jan 29 11:13:34.513960 systemd[1]: Detected architecture arm64.
Jan 29 11:13:34.513972 systemd[1]: Detected first boot.
Jan 29 11:13:34.513982 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:13:34.513995 zram_generator::config[1048]: No configuration found.
Jan 29 11:13:34.514009 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:13:34.514018 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:13:34.514029 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:13:34.514039 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:13:34.514049 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:13:34.514059 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:13:34.514069 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:13:34.514079 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:13:34.514091 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:13:34.514102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:13:34.514112 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:13:34.514122 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:13:34.514132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:13:34.514142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:13:34.514153 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:13:34.514163 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:13:34.514175 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:13:34.514186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:13:34.514197 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 11:13:34.514207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:13:34.514217 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:13:34.514227 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:13:34.514237 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:13:34.514247 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:13:34.514259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:13:34.514269 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:13:34.514279 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:13:34.514290 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:13:34.514300 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:13:34.514310 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:13:34.514320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:13:34.514330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:13:34.514340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:13:34.514350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:13:34.514362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:13:34.514372 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:13:34.514382 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:13:34.514392 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:13:34.514402 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:13:34.514414 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:13:34.514425 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:13:34.514435 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:13:34.514446 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:13:34.514457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:13:34.514467 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:13:34.514477 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:13:34.514488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:13:34.514498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:13:34.514507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:13:34.514517 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:13:34.514529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:13:34.514562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:13:34.514575 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:13:34.514585 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:13:34.514594 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:13:34.514605 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:13:34.514615 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:13:34.514625 kernel: fuse: init (API version 7.39)
Jan 29 11:13:34.514634 kernel: loop: module loaded
Jan 29 11:13:34.514646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:13:34.514656 kernel: ACPI: bus type drm_connector registered
Jan 29 11:13:34.514666 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:13:34.514676 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:13:34.514686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:13:34.514696 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:13:34.514723 systemd-journald[1108]: Collecting audit messages is disabled.
Jan 29 11:13:34.514744 systemd[1]: Stopped verity-setup.service.
Jan 29 11:13:34.514765 systemd-journald[1108]: Journal started
Jan 29 11:13:34.514786 systemd-journald[1108]: Runtime Journal (/run/log/journal/c245acb4c08941b29c1a302e074c21b5) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:13:34.345014 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:13:34.366427 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:13:34.366802 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:13:34.516570 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:13:34.517089 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:13:34.518018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:13:34.518939 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:13:34.519842 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:13:34.520720 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:13:34.521643 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:13:34.522590 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:13:34.524857 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:13:34.524988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:13:34.526095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:13:34.526227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:13:34.527341 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:13:34.528641 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:13:34.528785 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:13:34.529953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:13:34.530074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:13:34.531187 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:13:34.531324 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:13:34.532355 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:13:34.532480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:13:34.533568 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:13:34.534690 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:13:34.535814 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:13:34.547447 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:13:34.557636 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:13:34.559440 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:13:34.560296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:13:34.560331 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:13:34.561970 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:13:34.563900 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:13:34.565677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:13:34.566498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:13:34.567827 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:13:34.569475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:13:34.570467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:13:34.574692 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:13:34.575626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:13:34.577678 systemd-journald[1108]: Time spent on flushing to /var/log/journal/c245acb4c08941b29c1a302e074c21b5 is 22.551ms for 854 entries.
Jan 29 11:13:34.577678 systemd-journald[1108]: System Journal (/var/log/journal/c245acb4c08941b29c1a302e074c21b5) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:13:34.617853 systemd-journald[1108]: Received client request to flush runtime journal.
Jan 29 11:13:34.617903 kernel: loop0: detected capacity change from 0 to 194096
Jan 29 11:13:34.576835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:13:34.582771 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:13:34.585380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:13:34.587724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:13:34.588868 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:13:34.589861 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:13:34.590927 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:13:34.596699 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:13:34.600885 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:13:34.602452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:13:34.611811 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:13:34.614533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:13:34.620300 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:13:34.627556 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:13:34.630596 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:13:34.631979 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:13:34.632154 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 29 11:13:34.632165 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 29 11:13:34.635026 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:13:34.637272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:13:34.648736 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:13:34.652579 kernel: loop1: detected capacity change from 0 to 116808
Jan 29 11:13:34.680161 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:13:34.689762 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:13:34.693579 kernel: loop2: detected capacity change from 0 to 113536
Jan 29 11:13:34.704804 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 29 11:13:34.704824 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 29 11:13:34.709017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:13:34.724565 kernel: loop3: detected capacity change from 0 to 194096
Jan 29 11:13:34.730193 kernel: loop4: detected capacity change from 0 to 116808
Jan 29 11:13:34.733593 kernel: loop5: detected capacity change from 0 to 113536
Jan 29 11:13:34.735853 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:13:34.736222 (sd-merge)[1185]: Merged extensions into '/usr'.
Jan 29 11:13:34.740067 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:13:34.740082 systemd[1]: Reloading...
Jan 29 11:13:34.790573 zram_generator::config[1207]: No configuration found.
Jan 29 11:13:34.861234 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:13:34.894821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:13:34.930242 systemd[1]: Reloading finished in 189 ms.
Jan 29 11:13:34.961425 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:13:34.962592 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:13:34.979725 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:13:34.981619 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:13:34.993907 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:13:34.993920 systemd[1]: Reloading...
Jan 29 11:13:35.005374 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:13:35.005663 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:13:35.006285 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:13:35.006497 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 11:13:35.006561 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 11:13:35.008768 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:13:35.008779 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 11:13:35.015462 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:13:35.015479 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 11:13:35.044559 zram_generator::config[1274]: No configuration found.
Jan 29 11:13:35.121350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:13:35.156821 systemd[1]: Reloading finished in 162 ms.
Jan 29 11:13:35.171923 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:13:35.187286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:13:35.195231 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:13:35.197692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:13:35.200136 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:13:35.204926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:13:35.236822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:13:35.238818 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:13:35.240461 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:13:35.241964 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:13:35.246521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:13:35.250800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:13:35.252648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:13:35.255891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:13:35.256715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:13:35.258074 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:13:35.264983 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:13:35.265826 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:13:35.266820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:13:35.266948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:13:35.268114 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Jan 29 11:13:35.268219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:13:35.268358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:13:35.274789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:13:35.278438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:13:35.280595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:13:35.281632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:13:35.281881 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:13:35.282973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:13:35.283198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:13:35.285172 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:13:35.288188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:13:35.288363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:13:35.291169 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:13:35.292905 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:13:35.294307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:13:35.294434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:13:35.301829 augenrules[1357]: No rules
Jan 29 11:13:35.302971 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:13:35.304080 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:13:35.305571 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:13:35.312852 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:13:35.314243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:13:35.319737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:13:35.323928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:13:35.327729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:13:35.330130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:13:35.331386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:13:35.333929 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:13:35.337203 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:13:35.338492 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:13:35.339028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:13:35.339160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:13:35.340961 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:13:35.341089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:13:35.342857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:13:35.342976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:13:35.344316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:13:35.344437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:13:35.350211 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:13:35.350769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:13:35.350820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:13:35.384569 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353)
Jan 29 11:13:35.390631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:13:35.401742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:13:35.402753 systemd-resolved[1313]: Positive Trust Anchors:
Jan 29 11:13:35.402825 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:13:35.402857 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:13:35.418082 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Jan 29 11:13:35.428667 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:13:35.446843 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:13:35.448577 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:13:35.449546 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:13:35.450565 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:13:35.455987 systemd-networkd[1385]: lo: Link UP
Jan 29 11:13:35.455995 systemd-networkd[1385]: lo: Gained carrier
Jan 29 11:13:35.460797 systemd-networkd[1385]: Enumeration completed
Jan 29 11:13:35.460890 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:13:35.462081 systemd[1]: Reached target network.target - Network.
Jan 29 11:13:35.463587 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:13:35.463596 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:13:35.464704 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:13:35.464739 systemd-networkd[1385]: eth0: Link UP
Jan 29 11:13:35.464742 systemd-networkd[1385]: eth0: Gained carrier
Jan 29 11:13:35.464759 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:13:35.469731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:13:35.481764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:35.483604 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:13:35.486589 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Jan 29 11:13:35.487143 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:13:35.487196 systemd-timesyncd[1386]: Initial clock synchronization to Wed 2025-01-29 11:13:35.691825 UTC.
Jan 29 11:13:35.502028 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:13:35.519847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:13:35.524042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:35.531659 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:13:35.567983 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:13:35.569113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:13:35.569974 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:13:35.570812 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:13:35.571699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:13:35.572729 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:13:35.573598 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:13:35.574476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:13:35.575409 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:13:35.575446 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:13:35.576118 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:13:35.577524 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:13:35.579567 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:13:35.590486 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:13:35.592432 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:13:35.593728 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:13:35.594596 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:13:35.595263 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:13:35.596027 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:13:35.596055 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:13:35.596942 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:13:35.598652 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:13:35.601672 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:13:35.602049 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:13:35.605740 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:13:35.606823 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:13:35.608919 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:13:35.610982 jq[1419]: false
Jan 29 11:13:35.611741 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:13:35.613369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:13:35.616431 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:13:35.619764 dbus-daemon[1418]: [system] SELinux support is enabled
Jan 29 11:13:35.620757 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:13:35.624530 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:13:35.624974 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:13:35.625825 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:13:35.627136 extend-filesystems[1420]: Found loop3
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found loop4
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found loop5
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda1
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda2
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda3
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found usr
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda4
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda6
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda7
Jan 29 11:13:35.629613 extend-filesystems[1420]: Found vda9
Jan 29 11:13:35.629613 extend-filesystems[1420]: Checking size of /dev/vda9
Jan 29 11:13:35.628856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:13:35.630092 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:13:35.635198 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:13:35.637440 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:13:35.650884 jq[1429]: true
Jan 29 11:13:35.637788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:13:35.639851 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:13:35.639994 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:13:35.648222 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:13:35.648270 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:13:35.649818 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:13:35.649850 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:13:35.663193 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:13:35.664960 jq[1441]: true
Jan 29 11:13:35.668865 extend-filesystems[1420]: Resized partition /dev/vda9
Jan 29 11:13:35.671839 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:13:35.672035 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:13:35.675633 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:13:35.677557 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 11:13:35.682564 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353)
Jan 29 11:13:35.682725 tar[1434]: linux-arm64/helm
Jan 29 11:13:35.688879 update_engine[1427]: I20250129 11:13:35.688084 1427 main.cc:92] Flatcar Update Engine starting
Jan 29 11:13:35.695620 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:13:35.696094 update_engine[1427]: I20250129 11:13:35.695827 1427 update_check_scheduler.cc:74] Next update check in 9m22s
Jan 29 11:13:35.704211 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 11:13:35.706671 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:13:35.718012 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:13:35.718012 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:13:35.718012 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 11:13:35.723603 extend-filesystems[1420]: Resized filesystem in /dev/vda9
Jan 29 11:13:35.720774 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:13:35.721472 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:13:35.729733 bash[1470]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:13:35.732589 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:13:35.736096 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 11:13:35.736886 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 11:13:35.737327 systemd-logind[1426]: New seat seat0.
Jan 29 11:13:35.741087 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:13:35.751047 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:13:35.871833 containerd[1449]: time="2025-01-29T11:13:35.871718080Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:13:35.897831 containerd[1449]: time="2025-01-29T11:13:35.897052040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900024 containerd[1449]: time="2025-01-29T11:13:35.899872000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900024 containerd[1449]: time="2025-01-29T11:13:35.899969560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:13:35.900024 containerd[1449]: time="2025-01-29T11:13:35.899988120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:13:35.900240 containerd[1449]: time="2025-01-29T11:13:35.900199920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:13:35.900240 containerd[1449]: time="2025-01-29T11:13:35.900229360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900330 containerd[1449]: time="2025-01-29T11:13:35.900304840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900330 containerd[1449]: time="2025-01-29T11:13:35.900323040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900507 containerd[1449]: time="2025-01-29T11:13:35.900481280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900507 containerd[1449]: time="2025-01-29T11:13:35.900502760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900560 containerd[1449]: time="2025-01-29T11:13:35.900515680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900560 containerd[1449]: time="2025-01-29T11:13:35.900524440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900630 containerd[1449]: time="2025-01-29T11:13:35.900613280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900841 containerd[1449]: time="2025-01-29T11:13:35.900809800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900925 containerd[1449]: time="2025-01-29T11:13:35.900909200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:13:35.900948 containerd[1449]: time="2025-01-29T11:13:35.900926520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:13:35.901020 containerd[1449]: time="2025-01-29T11:13:35.901006280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:13:35.901064 containerd[1449]: time="2025-01-29T11:13:35.901052640Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:13:35.904700 containerd[1449]: time="2025-01-29T11:13:35.904663600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:13:35.904813 containerd[1449]: time="2025-01-29T11:13:35.904782440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:13:35.904813 containerd[1449]: time="2025-01-29T11:13:35.904809480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:13:35.904925 containerd[1449]: time="2025-01-29T11:13:35.904824520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:13:35.904925 containerd[1449]: time="2025-01-29T11:13:35.904920360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:13:35.905087 containerd[1449]: time="2025-01-29T11:13:35.905062240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:13:35.905563 containerd[1449]: time="2025-01-29T11:13:35.905519560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:13:35.905690 containerd[1449]: time="2025-01-29T11:13:35.905671640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:13:35.905724 containerd[1449]: time="2025-01-29T11:13:35.905696040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:13:35.905724 containerd[1449]: time="2025-01-29T11:13:35.905711240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:13:35.905766 containerd[1449]: time="2025-01-29T11:13:35.905726080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905766 containerd[1449]: time="2025-01-29T11:13:35.905738960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905766 containerd[1449]: time="2025-01-29T11:13:35.905757880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905815 containerd[1449]: time="2025-01-29T11:13:35.905771720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905815 containerd[1449]: time="2025-01-29T11:13:35.905786040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905815 containerd[1449]: time="2025-01-29T11:13:35.905797280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905815 containerd[1449]: time="2025-01-29T11:13:35.905808680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905881 containerd[1449]: time="2025-01-29T11:13:35.905819040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:13:35.905881 containerd[1449]: time="2025-01-29T11:13:35.905838480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905881 containerd[1449]: time="2025-01-29T11:13:35.905851880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905881 containerd[1449]: time="2025-01-29T11:13:35.905864000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905881 containerd[1449]: time="2025-01-29T11:13:35.905880760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905893080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905905200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905917440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905930000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905942560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905956120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.905967 containerd[1449]: time="2025-01-29T11:13:35.905967600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906083 containerd[1449]: time="2025-01-29T11:13:35.905978880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906083 containerd[1449]: time="2025-01-29T11:13:35.905992840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906083 containerd[1449]: time="2025-01-29T11:13:35.906007760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:13:35.906083 containerd[1449]: time="2025-01-29T11:13:35.906026440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906083 containerd[1449]: time="2025-01-29T11:13:35.906039120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906083 containerd[1449]: time="2025-01-29T11:13:35.906049160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:13:35.906392 containerd[1449]: time="2025-01-29T11:13:35.906357680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:13:35.906471 containerd[1449]: time="2025-01-29T11:13:35.906390400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:13:35.906500 containerd[1449]: time="2025-01-29T11:13:35.906472960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:13:35.906500 containerd[1449]: time="2025-01-29T11:13:35.906488560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:13:35.906555 containerd[1449]: time="2025-01-29T11:13:35.906498960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906596 containerd[1449]: time="2025-01-29T11:13:35.906578640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:13:35.906618 containerd[1449]: time="2025-01-29T11:13:35.906598960Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:13:35.906618 containerd[1449]: time="2025-01-29T11:13:35.906610280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:13:35.906970 containerd[1449]: time="2025-01-29T11:13:35.906920800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:13:35.907086 containerd[1449]: time="2025-01-29T11:13:35.906977760Z" level=info msg="Connect containerd service"
Jan 29 11:13:35.907086 containerd[1449]: time="2025-01-29T11:13:35.907007520Z" level=info msg="using legacy CRI server"
Jan 29 11:13:35.907086 containerd[1449]: time="2025-01-29T11:13:35.907070960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:13:35.907367 containerd[1449]: time="2025-01-29T11:13:35.907346000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:13:35.908340 containerd[1449]: time="2025-01-29T11:13:35.908310920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:13:35.908645 containerd[1449]: time="2025-01-29T11:13:35.908551160Z" level=info msg="Start subscribing containerd event"
Jan 29 11:13:35.908645 containerd[1449]: time="2025-01-29T11:13:35.908607360Z" level=info msg="Start recovering state"
Jan 29 11:13:35.908813 containerd[1449]: time="2025-01-29T11:13:35.908797680Z" level=info msg="Start event monitor"
Jan 29 11:13:35.909068 containerd[1449]: time="2025-01-29T11:13:35.908866640Z" level=info msg="Start snapshots syncer"
Jan 29 11:13:35.909068 containerd[1449]: time="2025-01-29T11:13:35.908881720Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:13:35.909068 containerd[1449]: time="2025-01-29T11:13:35.908889560Z" level=info msg="Start streaming server"
Jan 29 11:13:35.909263 containerd[1449]: time="2025-01-29T11:13:35.909231160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:13:35.909297 containerd[1449]: time="2025-01-29T11:13:35.909289160Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:13:35.909432 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:13:35.911659 containerd[1449]: time="2025-01-29T11:13:35.910773920Z" level=info msg="containerd successfully booted in 0.041786s"
Jan 29 11:13:36.039704 tar[1434]: linux-arm64/LICENSE
Jan 29 11:13:36.039704 tar[1434]: linux-arm64/README.md
Jan 29 11:13:36.052021 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:13:36.100876 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:13:36.119227 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:13:36.130854 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:13:36.135932 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:13:36.137620 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:13:36.139891 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:13:36.153672 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:13:36.168859 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:13:36.170755 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 11:13:36.171731 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:13:36.906679 systemd-networkd[1385]: eth0: Gained IPv6LL
Jan 29 11:13:36.912272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:13:36.913746 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:13:36.932883 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 11:13:36.935079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:13:36.936940 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:13:36.951342 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 11:13:36.951531 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 11:13:36.953332 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:13:36.957319 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:13:37.427157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:13:37.428633 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:13:37.429842 systemd[1]: Startup finished in 536ms (kernel) + 4.294s (initrd) + 3.471s (userspace) = 8.302s.
Jan 29 11:13:37.430861 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:13:37.900334 kubelet[1531]: E0129 11:13:37.900185 1531 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:13:37.903082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:13:37.903234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:13:42.790270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:13:42.791396 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:50942.service - OpenSSH per-connection server daemon (10.0.0.1:50942).
Jan 29 11:13:42.847120 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 50942 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:13:42.848739 sshd-session[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:13:42.861354 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:13:42.868906 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:13:42.870690 systemd-logind[1426]: New session 1 of user core.
Jan 29 11:13:42.879617 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:13:42.883822 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:13:42.887958 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:13:42.957711 systemd[1549]: Queued start job for default target default.target.
Jan 29 11:13:42.969460 systemd[1549]: Created slice app.slice - User Application Slice.
Jan 29 11:13:42.969505 systemd[1549]: Reached target paths.target - Paths.
Jan 29 11:13:42.969518 systemd[1549]: Reached target timers.target - Timers.
Jan 29 11:13:42.970776 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:13:42.980466 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:13:42.980529 systemd[1549]: Reached target sockets.target - Sockets.
Jan 29 11:13:42.980563 systemd[1549]: Reached target basic.target - Basic System.
Jan 29 11:13:42.980605 systemd[1549]: Reached target default.target - Main User Target.
Jan 29 11:13:42.980630 systemd[1549]: Startup finished in 87ms.
Jan 29 11:13:42.980863 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:13:42.982179 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:13:43.041968 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:50948.service - OpenSSH per-connection server daemon (10.0.0.1:50948).
Jan 29 11:13:43.080451 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 50948 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:13:43.081703 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:13:43.085378 systemd-logind[1426]: New session 2 of user core.
Jan 29 11:13:43.096716 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:13:43.149396 sshd[1562]: Connection closed by 10.0.0.1 port 50948
Jan 29 11:13:43.149054 sshd-session[1560]: pam_unix(sshd:session): session closed for user core
Jan 29 11:13:43.154831 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:50948.service: Deactivated successfully.
Jan 29 11:13:43.156339 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:13:43.158653 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:13:43.171859 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:50964.service - OpenSSH per-connection server daemon (10.0.0.1:50964).
Jan 29 11:13:43.172758 systemd-logind[1426]: Removed session 2.
Jan 29 11:13:43.206518 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 50964 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:13:43.207587 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:13:43.211467 systemd-logind[1426]: New session 3 of user core.
Jan 29 11:13:43.225693 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:13:43.274081 sshd[1569]: Connection closed by 10.0.0.1 port 50964
Jan 29 11:13:43.274713 sshd-session[1567]: pam_unix(sshd:session): session closed for user core
Jan 29 11:13:43.287012 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:50964.service: Deactivated successfully.
Jan 29 11:13:43.288403 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:13:43.289654 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:13:43.290746 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:50980.service - OpenSSH per-connection server daemon (10.0.0.1:50980).
Jan 29 11:13:43.291507 systemd-logind[1426]: Removed session 3.
Jan 29 11:13:43.329265 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 50980 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:13:43.330362 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:13:43.334205 systemd-logind[1426]: New session 4 of user core.
Jan 29 11:13:43.351687 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:13:43.403836 sshd[1576]: Connection closed by 10.0.0.1 port 50980 Jan 29 11:13:43.404224 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:43.412922 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:50980.service: Deactivated successfully. Jan 29 11:13:43.414347 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:13:43.415725 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:13:43.416744 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:50990.service - OpenSSH per-connection server daemon (10.0.0.1:50990). Jan 29 11:13:43.417852 systemd-logind[1426]: Removed session 4. Jan 29 11:13:43.455172 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 50990 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:43.456312 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:43.460080 systemd-logind[1426]: New session 5 of user core. Jan 29 11:13:43.469750 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:13:43.535238 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:13:43.535512 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:43.555506 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:43.556969 sshd[1583]: Connection closed by 10.0.0.1 port 50990 Jan 29 11:13:43.557358 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:43.570992 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:50990.service: Deactivated successfully. Jan 29 11:13:43.572604 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:13:43.575953 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:13:43.577383 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:50998.service - OpenSSH per-connection server daemon (10.0.0.1:50998). 
Jan 29 11:13:43.578340 systemd-logind[1426]: Removed session 5. Jan 29 11:13:43.616830 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 50998 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:43.618028 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:43.621962 systemd-logind[1426]: New session 6 of user core. Jan 29 11:13:43.630680 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:13:43.681724 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:13:43.681995 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:43.684834 sudo[1593]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:43.689059 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:13:43.689525 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:43.702856 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:13:43.725487 augenrules[1615]: No rules Jan 29 11:13:43.726730 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:13:43.728661 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:13:43.729672 sudo[1592]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:43.730790 sshd[1591]: Connection closed by 10.0.0.1 port 50998 Jan 29 11:13:43.731751 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:43.742705 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:50998.service: Deactivated successfully. Jan 29 11:13:43.744945 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:13:43.746079 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. 
Jan 29 11:13:43.747232 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:51004.service - OpenSSH per-connection server daemon (10.0.0.1:51004). Jan 29 11:13:43.747933 systemd-logind[1426]: Removed session 6. Jan 29 11:13:43.785498 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 51004 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:43.786627 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:43.790310 systemd-logind[1426]: New session 7 of user core. Jan 29 11:13:43.803739 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:13:43.854094 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:13:43.854368 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:44.163756 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:13:44.163890 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:13:44.413858 dockerd[1646]: time="2025-01-29T11:13:44.413738195Z" level=info msg="Starting up" Jan 29 11:13:44.556762 dockerd[1646]: time="2025-01-29T11:13:44.556702637Z" level=info msg="Loading containers: start." Jan 29 11:13:44.690649 kernel: Initializing XFRM netlink socket Jan 29 11:13:44.753852 systemd-networkd[1385]: docker0: Link UP Jan 29 11:13:44.793681 dockerd[1646]: time="2025-01-29T11:13:44.793633099Z" level=info msg="Loading containers: done." Jan 29 11:13:44.809815 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3176358795-merged.mount: Deactivated successfully. 
Jan 29 11:13:44.811034 dockerd[1646]: time="2025-01-29T11:13:44.810993653Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:13:44.811106 dockerd[1646]: time="2025-01-29T11:13:44.811078472Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:13:44.811491 dockerd[1646]: time="2025-01-29T11:13:44.811177536Z" level=info msg="Daemon has completed initialization" Jan 29 11:13:44.837989 dockerd[1646]: time="2025-01-29T11:13:44.837932836Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:13:44.838208 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:13:45.436479 containerd[1449]: time="2025-01-29T11:13:45.436438158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:13:46.227037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189703305.mount: Deactivated successfully. 
Jan 29 11:13:47.438356 containerd[1449]: time="2025-01-29T11:13:47.438314391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:47.439378 containerd[1449]: time="2025-01-29T11:13:47.439308413Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 29 11:13:47.439918 containerd[1449]: time="2025-01-29T11:13:47.439885312Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:47.443177 containerd[1449]: time="2025-01-29T11:13:47.443144724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:47.444587 containerd[1449]: time="2025-01-29T11:13:47.444479140Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.007989901s" Jan 29 11:13:47.444587 containerd[1449]: time="2025-01-29T11:13:47.444515191Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 11:13:47.462725 containerd[1449]: time="2025-01-29T11:13:47.462700027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:13:48.153510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
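containerd logs both the image size and the wall-clock pull time in its "Pulled image" entries (29861735 bytes in 2.007989901 s for kube-apiserver above), so an approximate pull throughput can be derived. A sketch using those two values verbatim:

```python
# Values copied from the containerd "Pulled image" entry above.
size_bytes = 29_861_735          # registry.k8s.io/kube-apiserver:v1.30.9
elapsed_s = 2.007989901

throughput_mib_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"~{throughput_mib_s:.1f} MiB/s")  # roughly 14.2 MiB/s
```

Note this is end-to-end time (registry round-trips plus decompression and unpacking), not raw network bandwidth, so it understates link speed.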
Jan 29 11:13:48.164805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:48.260448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:48.263890 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:13:48.305224 kubelet[1921]: E0129 11:13:48.305124 1921 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:13:48.308223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:13:48.308368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:13:49.220100 containerd[1449]: time="2025-01-29T11:13:49.220028875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:49.220428 containerd[1449]: time="2025-01-29T11:13:49.220366029Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 29 11:13:49.223618 containerd[1449]: time="2025-01-29T11:13:49.223549445Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:49.226566 containerd[1449]: time="2025-01-29T11:13:49.226504114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:49.227795 containerd[1449]: time="2025-01-29T11:13:49.227754650Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.765009746s" Jan 29 11:13:49.227795 containerd[1449]: time="2025-01-29T11:13:49.227787638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 11:13:49.245379 containerd[1449]: time="2025-01-29T11:13:49.245313823Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:13:50.353533 containerd[1449]: time="2025-01-29T11:13:50.353489672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:50.354687 containerd[1449]: time="2025-01-29T11:13:50.354649472Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 29 11:13:50.355619 containerd[1449]: time="2025-01-29T11:13:50.355575473Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:50.358557 containerd[1449]: time="2025-01-29T11:13:50.358517841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:50.359739 containerd[1449]: time="2025-01-29T11:13:50.359619532Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id 
\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.114270637s" Jan 29 11:13:50.359739 containerd[1449]: time="2025-01-29T11:13:50.359658084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 11:13:50.377073 containerd[1449]: time="2025-01-29T11:13:50.377034401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:13:51.593509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233580373.mount: Deactivated successfully. Jan 29 11:13:51.891576 containerd[1449]: time="2025-01-29T11:13:51.890757915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:51.891576 containerd[1449]: time="2025-01-29T11:13:51.891443555Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 29 11:13:51.892395 containerd[1449]: time="2025-01-29T11:13:51.892343653Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:51.894588 containerd[1449]: time="2025-01-29T11:13:51.894546836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:51.895420 containerd[1449]: time="2025-01-29T11:13:51.895148025Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.518071504s" Jan 29 11:13:51.895420 containerd[1449]: time="2025-01-29T11:13:51.895180176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 11:13:51.913523 containerd[1449]: time="2025-01-29T11:13:51.913497220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:13:52.583380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552700029.mount: Deactivated successfully. Jan 29 11:13:53.243697 containerd[1449]: time="2025-01-29T11:13:53.243639193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:53.245439 containerd[1449]: time="2025-01-29T11:13:53.245397390Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 11:13:53.246586 containerd[1449]: time="2025-01-29T11:13:53.246522036Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:53.250090 containerd[1449]: time="2025-01-29T11:13:53.250043001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:53.250865 containerd[1449]: time="2025-01-29T11:13:53.250836735Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.337307892s" Jan 29 11:13:53.250865 containerd[1449]: time="2025-01-29T11:13:53.250864087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 11:13:53.269527 containerd[1449]: time="2025-01-29T11:13:53.269498311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:13:53.827145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122993101.mount: Deactivated successfully. Jan 29 11:13:53.832067 containerd[1449]: time="2025-01-29T11:13:53.832021850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:53.832674 containerd[1449]: time="2025-01-29T11:13:53.832458522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 29 11:13:53.833290 containerd[1449]: time="2025-01-29T11:13:53.833254942Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:53.835626 containerd[1449]: time="2025-01-29T11:13:53.835592387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:53.836534 containerd[1449]: time="2025-01-29T11:13:53.836501545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 566.970668ms" Jan 29 
11:13:53.836749 containerd[1449]: time="2025-01-29T11:13:53.836641353Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 11:13:53.855392 containerd[1449]: time="2025-01-29T11:13:53.855361043Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:13:54.491497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740281857.mount: Deactivated successfully. Jan 29 11:13:56.358569 containerd[1449]: time="2025-01-29T11:13:56.358444240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:56.359655 containerd[1449]: time="2025-01-29T11:13:56.359605494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 29 11:13:56.360322 containerd[1449]: time="2025-01-29T11:13:56.360286819Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:56.364324 containerd[1449]: time="2025-01-29T11:13:56.364290739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:56.365190 containerd[1449]: time="2025-01-29T11:13:56.365159194Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.509652863s" Jan 29 11:13:56.365394 containerd[1449]: time="2025-01-29T11:13:56.365294434Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 11:13:58.558692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:13:58.568736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:58.666458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:58.670991 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:13:58.732890 kubelet[2154]: E0129 11:13:58.732832 2154 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:13:58.735482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:13:58.735675 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:14:01.006627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:01.016825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:14:01.029475 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)... Jan 29 11:14:01.029490 systemd[1]: Reloading... Jan 29 11:14:01.096591 zram_generator::config[2208]: No configuration found. Jan 29 11:14:01.315066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:14:01.367418 systemd[1]: Reloading finished in 337 ms. Jan 29 11:14:01.404292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:14:01.406627 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:14:01.406816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:01.408291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:14:01.505972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:01.509565 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:14:01.549706 kubelet[2255]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:14:01.549706 kubelet[2255]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:14:01.549706 kubelet[2255]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:14:01.550070 kubelet[2255]: I0129 11:14:01.549796 2255 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:14:02.031443 kubelet[2255]: I0129 11:14:02.031392 2255 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:14:02.031443 kubelet[2255]: I0129 11:14:02.031421 2255 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:14:02.031688 kubelet[2255]: I0129 11:14:02.031668 2255 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:14:02.055190 kubelet[2255]: I0129 11:14:02.055049 2255 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:14:02.055821 kubelet[2255]: E0129 11:14:02.055649 2255 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.066462 kubelet[2255]: I0129 11:14:02.066424 2255 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:14:02.068553 kubelet[2255]: I0129 11:14:02.067723 2255 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:14:02.068553 kubelet[2255]: I0129 11:14:02.067765 2255 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:14:02.068553 kubelet[2255]: I0129 11:14:02.068029 2255 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:14:02.068553 kubelet[2255]: I0129 11:14:02.068037 2255 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:14:02.068553 kubelet[2255]: I0129 11:14:02.068283 2255 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:14:02.069330 kubelet[2255]: I0129 11:14:02.069310 2255 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:14:02.069399 kubelet[2255]: I0129 11:14:02.069389 2255 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:14:02.069729 kubelet[2255]: I0129 11:14:02.069717 2255 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:14:02.069919 kubelet[2255]: I0129 11:14:02.069896 2255 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:14:02.070638 kubelet[2255]: W0129 11:14:02.070580 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.070638 kubelet[2255]: E0129 11:14:02.070640 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.070718 kubelet[2255]: W0129 11:14:02.070657 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.070718 kubelet[2255]: E0129 11:14:02.070710 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: 
connection refused Jan 29 11:14:02.071161 kubelet[2255]: I0129 11:14:02.071140 2255 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:14:02.071599 kubelet[2255]: I0129 11:14:02.071583 2255 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:14:02.071764 kubelet[2255]: W0129 11:14:02.071752 2255 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:14:02.072776 kubelet[2255]: I0129 11:14:02.072757 2255 server.go:1264] "Started kubelet" Jan 29 11:14:02.074026 kubelet[2255]: I0129 11:14:02.074003 2255 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:14:02.075577 kubelet[2255]: E0129 11:14:02.075377 2255 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2586cc5c1ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:14:02.072735738 +0000 UTC m=+0.560067861,LastTimestamp:2025-01-29 11:14:02.072735738 +0000 UTC m=+0.560067861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:14:02.075676 kubelet[2255]: I0129 11:14:02.075647 2255 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:14:02.076417 kubelet[2255]: I0129 11:14:02.076343 2255 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:14:02.076668 kubelet[2255]: 
I0129 11:14:02.076641 2255 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:14:02.077903 kubelet[2255]: I0129 11:14:02.077868 2255 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:14:02.077999 kubelet[2255]: I0129 11:14:02.077981 2255 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:14:02.078982 kubelet[2255]: I0129 11:14:02.078937 2255 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:14:02.079288 kubelet[2255]: W0129 11:14:02.079231 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.079288 kubelet[2255]: E0129 11:14:02.079288 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.080558 kubelet[2255]: E0129 11:14:02.080029 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms" Jan 29 11:14:02.080558 kubelet[2255]: I0129 11:14:02.080309 2255 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:14:02.080558 kubelet[2255]: I0129 11:14:02.080387 2255 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:14:02.081099 kubelet[2255]: E0129 11:14:02.081066 2255 kubelet.go:1467] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:14:02.081211 kubelet[2255]: I0129 11:14:02.081190 2255 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:14:02.081333 kubelet[2255]: I0129 11:14:02.081309 2255 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:14:02.091251 kubelet[2255]: I0129 11:14:02.091203 2255 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:14:02.093215 kubelet[2255]: I0129 11:14:02.093183 2255 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:14:02.093614 kubelet[2255]: I0129 11:14:02.093594 2255 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:14:02.093678 kubelet[2255]: I0129 11:14:02.093662 2255 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:14:02.093737 kubelet[2255]: E0129 11:14:02.093718 2255 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:14:02.094242 kubelet[2255]: W0129 11:14:02.094196 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.094271 kubelet[2255]: E0129 11:14:02.094253 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:02.094464 kubelet[2255]: I0129 11:14:02.094441 2255 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:14:02.094464 kubelet[2255]: I0129 11:14:02.094463 2255 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Jan 29 11:14:02.094529 kubelet[2255]: I0129 11:14:02.094507 2255 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:14:02.178931 kubelet[2255]: I0129 11:14:02.178882 2255 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:14:02.180980 kubelet[2255]: E0129 11:14:02.180946 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jan 29 11:14:02.182948 kubelet[2255]: I0129 11:14:02.182909 2255 policy_none.go:49] "None policy: Start" Jan 29 11:14:02.183949 kubelet[2255]: I0129 11:14:02.183495 2255 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:14:02.183949 kubelet[2255]: I0129 11:14:02.183516 2255 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:14:02.189072 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:14:02.194200 kubelet[2255]: E0129 11:14:02.194163 2255 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:14:02.202448 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:14:02.205403 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:14:02.225797 kubelet[2255]: I0129 11:14:02.225523 2255 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:14:02.226801 kubelet[2255]: I0129 11:14:02.226120 2255 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:14:02.226801 kubelet[2255]: I0129 11:14:02.226225 2255 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:14:02.228337 kubelet[2255]: E0129 11:14:02.228252 2255 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:14:02.280837 kubelet[2255]: E0129 11:14:02.280787 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" Jan 29 11:14:02.382772 kubelet[2255]: I0129 11:14:02.382305 2255 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:14:02.382772 kubelet[2255]: E0129 11:14:02.382649 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jan 29 11:14:02.395609 kubelet[2255]: I0129 11:14:02.395307 2255 topology_manager.go:215] "Topology Admit Handler" podUID="a52b871d9550f36b014eed6b1be99683" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:14:02.396394 kubelet[2255]: I0129 11:14:02.396361 2255 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:14:02.399403 kubelet[2255]: I0129 11:14:02.399347 2255 topology_manager.go:215] "Topology Admit Handler" 
podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:14:02.406074 systemd[1]: Created slice kubepods-burstable-poda52b871d9550f36b014eed6b1be99683.slice - libcontainer container kubepods-burstable-poda52b871d9550f36b014eed6b1be99683.slice. Jan 29 11:14:02.430224 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 29 11:14:02.443914 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 29 11:14:02.483876 kubelet[2255]: I0129 11:14:02.483765 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:02.483876 kubelet[2255]: I0129 11:14:02.483822 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:02.483876 kubelet[2255]: I0129 11:14:02.483849 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:02.483876 kubelet[2255]: I0129 11:14:02.483880 2255 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:02.484163 kubelet[2255]: I0129 11:14:02.483896 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:14:02.484163 kubelet[2255]: I0129 11:14:02.483933 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a52b871d9550f36b014eed6b1be99683-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a52b871d9550f36b014eed6b1be99683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:02.484163 kubelet[2255]: I0129 11:14:02.483983 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a52b871d9550f36b014eed6b1be99683-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a52b871d9550f36b014eed6b1be99683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:02.484322 kubelet[2255]: I0129 11:14:02.484256 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a52b871d9550f36b014eed6b1be99683-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a52b871d9550f36b014eed6b1be99683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:02.484322 kubelet[2255]: I0129 11:14:02.484283 2255 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:02.681601 kubelet[2255]: E0129 11:14:02.681458 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Jan 29 11:14:02.726061 kubelet[2255]: E0129 11:14:02.726016 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:02.726867 containerd[1449]: time="2025-01-29T11:14:02.726828101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a52b871d9550f36b014eed6b1be99683,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:02.742273 kubelet[2255]: E0129 11:14:02.741910 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:02.742407 containerd[1449]: time="2025-01-29T11:14:02.742362001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:02.746371 kubelet[2255]: E0129 11:14:02.746332 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:02.746743 containerd[1449]: time="2025-01-29T11:14:02.746715019Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:02.783831 kubelet[2255]: I0129 11:14:02.783782 2255 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:14:02.784161 kubelet[2255]: E0129 11:14:02.784123 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jan 29 11:14:03.266834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223960924.mount: Deactivated successfully. Jan 29 11:14:03.271957 containerd[1449]: time="2025-01-29T11:14:03.271913614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:03.274179 containerd[1449]: time="2025-01-29T11:14:03.274137520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:14:03.275077 containerd[1449]: time="2025-01-29T11:14:03.275043790Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:03.281605 containerd[1449]: time="2025-01-29T11:14:03.280164950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:03.281605 containerd[1449]: time="2025-01-29T11:14:03.280946253Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:03.281706 containerd[1449]: 
time="2025-01-29T11:14:03.281657908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:03.282194 containerd[1449]: time="2025-01-29T11:14:03.282091689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:14:03.283189 containerd[1449]: time="2025-01-29T11:14:03.282833485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.924881ms" Jan 29 11:14:03.283189 containerd[1449]: time="2025-01-29T11:14:03.282880437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:14:03.286700 containerd[1449]: time="2025-01-29T11:14:03.286669431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.891283ms" Jan 29 11:14:03.294250 containerd[1449]: time="2025-01-29T11:14:03.294214276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.775615ms" Jan 29 11:14:03.404575 containerd[1449]: time="2025-01-29T11:14:03.404465638Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:03.404575 containerd[1449]: time="2025-01-29T11:14:03.404551018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:03.404575 containerd[1449]: time="2025-01-29T11:14:03.404567309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:03.404749 containerd[1449]: time="2025-01-29T11:14:03.404647085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:03.409791 containerd[1449]: time="2025-01-29T11:14:03.408789925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:03.409791 containerd[1449]: time="2025-01-29T11:14:03.408846164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:03.409791 containerd[1449]: time="2025-01-29T11:14:03.408858332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:03.410410 containerd[1449]: time="2025-01-29T11:14:03.410323070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:03.410884 containerd[1449]: time="2025-01-29T11:14:03.410738799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:03.410884 containerd[1449]: time="2025-01-29T11:14:03.410826380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:03.410967 containerd[1449]: time="2025-01-29T11:14:03.410873093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:03.410987 containerd[1449]: time="2025-01-29T11:14:03.410951067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:03.435717 systemd[1]: Started cri-containerd-6b768593ea23f760fceac32f11469df6abf7b9f099907114888554a75140fced.scope - libcontainer container 6b768593ea23f760fceac32f11469df6abf7b9f099907114888554a75140fced. Jan 29 11:14:03.436770 systemd[1]: Started cri-containerd-9968d84855800b3890a0ed86992f33a1c37268df97cc43a78563299c0b6ebc60.scope - libcontainer container 9968d84855800b3890a0ed86992f33a1c37268df97cc43a78563299c0b6ebc60. Jan 29 11:14:03.440633 systemd[1]: Started cri-containerd-5d37ccab7ba9fd187d0f372e88bb2292609954263f820047235b1b7062cf030f.scope - libcontainer container 5d37ccab7ba9fd187d0f372e88bb2292609954263f820047235b1b7062cf030f. 
Jan 29 11:14:03.447077 kubelet[2255]: W0129 11:14:03.446982 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:03.447077 kubelet[2255]: E0129 11:14:03.447060 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:03.475480 containerd[1449]: time="2025-01-29T11:14:03.474863376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a52b871d9550f36b014eed6b1be99683,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b768593ea23f760fceac32f11469df6abf7b9f099907114888554a75140fced\"" Jan 29 11:14:03.476173 kubelet[2255]: E0129 11:14:03.476152 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:03.477762 containerd[1449]: time="2025-01-29T11:14:03.477407505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d37ccab7ba9fd187d0f372e88bb2292609954263f820047235b1b7062cf030f\"" Jan 29 11:14:03.477935 kubelet[2255]: E0129 11:14:03.477915 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:03.479945 containerd[1449]: time="2025-01-29T11:14:03.479910445Z" level=info msg="CreateContainer within sandbox \"6b768593ea23f760fceac32f11469df6abf7b9f099907114888554a75140fced\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:14:03.480405 containerd[1449]: time="2025-01-29T11:14:03.480375968Z" level=info msg="CreateContainer within sandbox \"5d37ccab7ba9fd187d0f372e88bb2292609954263f820047235b1b7062cf030f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:14:03.482703 kubelet[2255]: E0129 11:14:03.482665 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="1.6s" Jan 29 11:14:03.487881 containerd[1449]: time="2025-01-29T11:14:03.487840477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9968d84855800b3890a0ed86992f33a1c37268df97cc43a78563299c0b6ebc60\"" Jan 29 11:14:03.488892 kubelet[2255]: E0129 11:14:03.488854 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:03.490504 containerd[1449]: time="2025-01-29T11:14:03.490476149Z" level=info msg="CreateContainer within sandbox \"9968d84855800b3890a0ed86992f33a1c37268df97cc43a78563299c0b6ebc60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:14:03.499517 containerd[1449]: time="2025-01-29T11:14:03.499461155Z" level=info msg="CreateContainer within sandbox \"6b768593ea23f760fceac32f11469df6abf7b9f099907114888554a75140fced\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f90fd79d7109d4ec14deb6fffba723c6e41ff1310245caf25c003bca57d3420\"" Jan 29 11:14:03.499974 containerd[1449]: time="2025-01-29T11:14:03.499936846Z" level=info msg="StartContainer for \"2f90fd79d7109d4ec14deb6fffba723c6e41ff1310245caf25c003bca57d3420\"" Jan 29 
11:14:03.500713 containerd[1449]: time="2025-01-29T11:14:03.500678882Z" level=info msg="CreateContainer within sandbox \"5d37ccab7ba9fd187d0f372e88bb2292609954263f820047235b1b7062cf030f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b7efabb5c616cc703b43398d01370335a9987b96606695ebbc76ba84c045ea8\"" Jan 29 11:14:03.501018 containerd[1449]: time="2025-01-29T11:14:03.500992660Z" level=info msg="StartContainer for \"4b7efabb5c616cc703b43398d01370335a9987b96606695ebbc76ba84c045ea8\"" Jan 29 11:14:03.505545 containerd[1449]: time="2025-01-29T11:14:03.505502555Z" level=info msg="CreateContainer within sandbox \"9968d84855800b3890a0ed86992f33a1c37268df97cc43a78563299c0b6ebc60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d65e94139e1405679c88dc46d068cc9de9b048ffcc1cfeb67b0014bb08347e2b\"" Jan 29 11:14:03.506700 containerd[1449]: time="2025-01-29T11:14:03.506583587Z" level=info msg="StartContainer for \"d65e94139e1405679c88dc46d068cc9de9b048ffcc1cfeb67b0014bb08347e2b\"" Jan 29 11:14:03.521712 systemd[1]: Started cri-containerd-2f90fd79d7109d4ec14deb6fffba723c6e41ff1310245caf25c003bca57d3420.scope - libcontainer container 2f90fd79d7109d4ec14deb6fffba723c6e41ff1310245caf25c003bca57d3420. Jan 29 11:14:03.525549 systemd[1]: Started cri-containerd-4b7efabb5c616cc703b43398d01370335a9987b96606695ebbc76ba84c045ea8.scope - libcontainer container 4b7efabb5c616cc703b43398d01370335a9987b96606695ebbc76ba84c045ea8. Jan 29 11:14:03.531280 systemd[1]: Started cri-containerd-d65e94139e1405679c88dc46d068cc9de9b048ffcc1cfeb67b0014bb08347e2b.scope - libcontainer container d65e94139e1405679c88dc46d068cc9de9b048ffcc1cfeb67b0014bb08347e2b. 
Jan 29 11:14:03.536994 kubelet[2255]: W0129 11:14:03.536943 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:03.537110 kubelet[2255]: E0129 11:14:03.537091 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:03.569794 containerd[1449]: time="2025-01-29T11:14:03.569751739Z" level=info msg="StartContainer for \"2f90fd79d7109d4ec14deb6fffba723c6e41ff1310245caf25c003bca57d3420\" returns successfully" Jan 29 11:14:03.571339 containerd[1449]: time="2025-01-29T11:14:03.571308140Z" level=info msg="StartContainer for \"4b7efabb5c616cc703b43398d01370335a9987b96606695ebbc76ba84c045ea8\" returns successfully" Jan 29 11:14:03.585942 kubelet[2255]: I0129 11:14:03.585913 2255 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:14:03.586390 kubelet[2255]: E0129 11:14:03.586368 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Jan 29 11:14:03.595227 containerd[1449]: time="2025-01-29T11:14:03.595198628Z" level=info msg="StartContainer for \"d65e94139e1405679c88dc46d068cc9de9b048ffcc1cfeb67b0014bb08347e2b\" returns successfully" Jan 29 11:14:03.642061 kubelet[2255]: W0129 11:14:03.641985 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 
11:14:03.642061 kubelet[2255]: E0129 11:14:03.642042 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:03.657364 kubelet[2255]: W0129 11:14:03.657285 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:03.657364 kubelet[2255]: E0129 11:14:03.657348 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Jan 29 11:14:04.103323 kubelet[2255]: E0129 11:14:04.103294 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:04.105220 kubelet[2255]: E0129 11:14:04.104811 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:04.106984 kubelet[2255]: E0129 11:14:04.106951 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.088419 kubelet[2255]: E0129 11:14:05.088350 2255 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:14:05.109458 kubelet[2255]: E0129 11:14:05.109413 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.187582 kubelet[2255]: I0129 11:14:05.187524 2255 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:14:05.194984 kubelet[2255]: I0129 11:14:05.194948 2255 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:14:05.201095 kubelet[2255]: E0129 11:14:05.201046 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:14:05.301169 kubelet[2255]: E0129 11:14:05.301133 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:14:05.402412 kubelet[2255]: E0129 11:14:05.402099 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:14:05.502819 kubelet[2255]: E0129 11:14:05.502761 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:14:05.602921 kubelet[2255]: E0129 11:14:05.602878 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:14:06.072129 kubelet[2255]: I0129 11:14:06.072081 2255 apiserver.go:52] "Watching apiserver" Jan 29 11:14:06.078817 kubelet[2255]: I0129 11:14:06.078790 2255 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:14:06.123153 kubelet[2255]: E0129 11:14:06.123120 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:06.960572 systemd[1]: Reloading requested from client PID 2536 ('systemctl') (unit session-7.scope)... Jan 29 11:14:06.960586 systemd[1]: Reloading... Jan 29 11:14:07.020598 zram_generator::config[2576]: No configuration found. 
Jan 29 11:14:07.103295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:14:07.114017 kubelet[2255]: E0129 11:14:07.113977 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:07.168649 systemd[1]: Reloading finished in 207 ms. Jan 29 11:14:07.200020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:14:07.207885 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:14:07.208069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:07.216843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:14:07.313724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:07.317472 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:14:07.352969 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:14:07.352969 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:14:07.352969 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:14:07.353340 kubelet[2617]: I0129 11:14:07.353005 2617 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:14:07.356606 kubelet[2617]: I0129 11:14:07.356581 2617 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 11:14:07.356606 kubelet[2617]: I0129 11:14:07.356604 2617 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:14:07.356839 kubelet[2617]: I0129 11:14:07.356825 2617 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 11:14:07.359338 kubelet[2617]: I0129 11:14:07.359295 2617 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 11:14:07.360897 kubelet[2617]: I0129 11:14:07.360859 2617 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:14:07.370887 kubelet[2617]: I0129 11:14:07.368125 2617 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:14:07.370887 kubelet[2617]: I0129 11:14:07.368333 2617 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:14:07.370887 kubelet[2617]: I0129 11:14:07.368353 2617 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368722 2617 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368732 2617 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368769 2617 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368853 2617 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368867 2617 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368897 2617 kubelet.go:312] "Adding apiserver pod source"
Jan 29 11:14:07.371107 kubelet[2617]: I0129 11:14:07.368912 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:14:07.371601 kubelet[2617]: I0129 11:14:07.371577 2617 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:14:07.371776 kubelet[2617]: I0129 11:14:07.371735 2617 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:14:07.372676 kubelet[2617]: I0129 11:14:07.372647 2617 server.go:1264] "Started kubelet"
Jan 29 11:14:07.373747 kubelet[2617]: I0129 11:14:07.373610 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:14:07.373998 kubelet[2617]: I0129 11:14:07.373962 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:14:07.374321 kubelet[2617]: I0129 11:14:07.374302 2617 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:14:07.378989 kubelet[2617]: I0129 11:14:07.378963 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:14:07.381549 kubelet[2617]: I0129 11:14:07.380513 2617 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 11:14:07.381894 kubelet[2617]: I0129 11:14:07.381863 2617 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:14:07.382050 kubelet[2617]: I0129 11:14:07.382029 2617 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:14:07.383562 kubelet[2617]: I0129 11:14:07.382342 2617 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 11:14:07.385748 kubelet[2617]: I0129 11:14:07.383756 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:14:07.387190 kubelet[2617]: I0129 11:14:07.387147 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:14:07.388055 kubelet[2617]: I0129 11:14:07.388025 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:14:07.388108 kubelet[2617]: I0129 11:14:07.388061 2617 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:14:07.388108 kubelet[2617]: I0129 11:14:07.388076 2617 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 11:14:07.388146 kubelet[2617]: E0129 11:14:07.388112 2617 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:14:07.400817 kubelet[2617]: I0129 11:14:07.400782 2617 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:14:07.400817 kubelet[2617]: I0129 11:14:07.400802 2617 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:14:07.401028 kubelet[2617]: E0129 11:14:07.400996 2617 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:14:07.425571 kubelet[2617]: I0129 11:14:07.425486 2617 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:14:07.425571 kubelet[2617]: I0129 11:14:07.425502 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:14:07.425571 kubelet[2617]: I0129 11:14:07.425519 2617 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:14:07.425714 kubelet[2617]: I0129 11:14:07.425671 2617 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 11:14:07.425714 kubelet[2617]: I0129 11:14:07.425694 2617 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 11:14:07.425714 kubelet[2617]: I0129 11:14:07.425711 2617 policy_none.go:49] "None policy: Start"
Jan 29 11:14:07.426332 kubelet[2617]: I0129 11:14:07.426314 2617 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:14:07.426390 kubelet[2617]: I0129 11:14:07.426338 2617 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:14:07.426494 kubelet[2617]: I0129 11:14:07.426478 2617 state_mem.go:75] "Updated machine memory state"
Jan 29 11:14:07.430022 kubelet[2617]: I0129 11:14:07.429992 2617 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:14:07.430190 kubelet[2617]: I0129 11:14:07.430148 2617 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:14:07.430270 kubelet[2617]: I0129 11:14:07.430253 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:14:07.484745 kubelet[2617]: I0129 11:14:07.484655 2617 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 11:14:07.489306 kubelet[2617]: I0129 11:14:07.488440 2617 topology_manager.go:215] "Topology Admit Handler" podUID="a52b871d9550f36b014eed6b1be99683" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 11:14:07.489306 kubelet[2617]: I0129 11:14:07.488613 2617 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 11:14:07.489306 kubelet[2617]: I0129 11:14:07.488675 2617 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 11:14:07.492092 kubelet[2617]: I0129 11:14:07.492046 2617 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 29 11:14:07.492154 kubelet[2617]: I0129 11:14:07.492114 2617 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 11:14:07.494587 kubelet[2617]: E0129 11:14:07.494395 2617 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:07.583824 kubelet[2617]: I0129 11:14:07.583787 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a52b871d9550f36b014eed6b1be99683-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a52b871d9550f36b014eed6b1be99683\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:07.583953 kubelet[2617]: I0129 11:14:07.583837 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a52b871d9550f36b014eed6b1be99683-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a52b871d9550f36b014eed6b1be99683\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:07.583953 kubelet[2617]: I0129 11:14:07.583867 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:07.583953 kubelet[2617]: I0129 11:14:07.583888 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:07.583953 kubelet[2617]: I0129 11:14:07.583903 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:07.583953 kubelet[2617]: I0129 11:14:07.583942 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:14:07.584064 kubelet[2617]: I0129 11:14:07.583956 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a52b871d9550f36b014eed6b1be99683-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a52b871d9550f36b014eed6b1be99683\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:07.584064 kubelet[2617]: I0129 11:14:07.583989 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:07.584064 kubelet[2617]: I0129 11:14:07.584027 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:07.794082 kubelet[2617]: E0129 11:14:07.793954 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:07.795332 kubelet[2617]: E0129 11:14:07.795246 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:07.795693 kubelet[2617]: E0129 11:14:07.795656 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:08.369611 kubelet[2617]: I0129 11:14:08.369573 2617 apiserver.go:52] "Watching apiserver"
Jan 29 11:14:08.382809 kubelet[2617]: I0129 11:14:08.382756 2617 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:14:08.414580 kubelet[2617]: E0129 11:14:08.413182 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:08.414580 kubelet[2617]: E0129 11:14:08.413874 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:08.417694 kubelet[2617]: E0129 11:14:08.417662 2617 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:08.418485 kubelet[2617]: E0129 11:14:08.418468 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:08.445279 kubelet[2617]: I0129 11:14:08.445219 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.445202109 podStartE2EDuration="1.445202109s" podCreationTimestamp="2025-01-29 11:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:08.435413785 +0000 UTC m=+1.114982574" watchObservedRunningTime="2025-01-29 11:14:08.445202109 +0000 UTC m=+1.124770897"
Jan 29 11:14:08.445513 kubelet[2617]: I0129 11:14:08.445484 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.445476839 podStartE2EDuration="2.445476839s" podCreationTimestamp="2025-01-29 11:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:08.443722024 +0000 UTC m=+1.123290772" watchObservedRunningTime="2025-01-29 11:14:08.445476839 +0000 UTC m=+1.125045627"
Jan 29 11:14:09.412134 kubelet[2617]: E0129 11:14:09.412100 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:11.021710 kubelet[2617]: E0129 11:14:11.021675 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:11.969212 sudo[1626]: pam_unix(sudo:session): session closed for user root
Jan 29 11:14:11.970595 sshd[1625]: Connection closed by 10.0.0.1 port 51004
Jan 29 11:14:11.971095 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
Jan 29 11:14:11.974725 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:51004.service: Deactivated successfully.
Jan 29 11:14:11.976854 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:14:11.977063 systemd[1]: session-7.scope: Consumed 6.774s CPU time, 192.5M memory peak, 0B memory swap peak.
Jan 29 11:14:11.978030 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:14:11.979111 systemd-logind[1426]: Removed session 7.
Jan 29 11:14:15.079564 kubelet[2617]: E0129 11:14:15.079513 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:15.092721 kubelet[2617]: I0129 11:14:15.092599 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.09253207 podStartE2EDuration="8.09253207s" podCreationTimestamp="2025-01-29 11:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:08.456861324 +0000 UTC m=+1.136430112" watchObservedRunningTime="2025-01-29 11:14:15.09253207 +0000 UTC m=+7.772100858"
Jan 29 11:14:15.420572 kubelet[2617]: E0129 11:14:15.420456 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:16.652443 kubelet[2617]: E0129 11:14:16.652405 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:17.421875 kubelet[2617]: E0129 11:14:17.421843 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:21.034622 kubelet[2617]: E0129 11:14:21.034264 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:21.134048 update_engine[1427]: I20250129 11:14:21.133970 1427 update_attempter.cc:509] Updating boot flags...
Jan 29 11:14:21.167566 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2717)
Jan 29 11:14:21.208570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2715)
Jan 29 11:14:23.016422 kubelet[2617]: I0129 11:14:23.016356 2617 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 11:14:23.028315 containerd[1449]: time="2025-01-29T11:14:23.028250369Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:14:23.029938 kubelet[2617]: I0129 11:14:23.028578 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 11:14:23.669214 kubelet[2617]: I0129 11:14:23.669168 2617 topology_manager.go:215] "Topology Admit Handler" podUID="87da6278-9bf4-427d-a177-71ced9a105ff" podNamespace="kube-system" podName="kube-proxy-wgtsr"
Jan 29 11:14:23.679277 systemd[1]: Created slice kubepods-besteffort-pod87da6278_9bf4_427d_a177_71ced9a105ff.slice - libcontainer container kubepods-besteffort-pod87da6278_9bf4_427d_a177_71ced9a105ff.slice.
Jan 29 11:14:23.694456 kubelet[2617]: I0129 11:14:23.694422 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87da6278-9bf4-427d-a177-71ced9a105ff-kube-proxy\") pod \"kube-proxy-wgtsr\" (UID: \"87da6278-9bf4-427d-a177-71ced9a105ff\") " pod="kube-system/kube-proxy-wgtsr"
Jan 29 11:14:23.694456 kubelet[2617]: I0129 11:14:23.694459 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87da6278-9bf4-427d-a177-71ced9a105ff-lib-modules\") pod \"kube-proxy-wgtsr\" (UID: \"87da6278-9bf4-427d-a177-71ced9a105ff\") " pod="kube-system/kube-proxy-wgtsr"
Jan 29 11:14:23.694623 kubelet[2617]: I0129 11:14:23.694483 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9h6k\" (UniqueName: \"kubernetes.io/projected/87da6278-9bf4-427d-a177-71ced9a105ff-kube-api-access-r9h6k\") pod \"kube-proxy-wgtsr\" (UID: \"87da6278-9bf4-427d-a177-71ced9a105ff\") " pod="kube-system/kube-proxy-wgtsr"
Jan 29 11:14:23.694623 kubelet[2617]: I0129 11:14:23.694504 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87da6278-9bf4-427d-a177-71ced9a105ff-xtables-lock\") pod \"kube-proxy-wgtsr\" (UID: \"87da6278-9bf4-427d-a177-71ced9a105ff\") " pod="kube-system/kube-proxy-wgtsr"
Jan 29 11:14:23.763184 kubelet[2617]: I0129 11:14:23.763139 2617 topology_manager.go:215] "Topology Admit Handler" podUID="fc34c05a-a24c-45df-99b4-54fe2b9948c8" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-g7xll"
Jan 29 11:14:23.775594 systemd[1]: Created slice kubepods-besteffort-podfc34c05a_a24c_45df_99b4_54fe2b9948c8.slice - libcontainer container kubepods-besteffort-podfc34c05a_a24c_45df_99b4_54fe2b9948c8.slice.
Jan 29 11:14:23.796287 kubelet[2617]: I0129 11:14:23.795715 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8r9p\" (UniqueName: \"kubernetes.io/projected/fc34c05a-a24c-45df-99b4-54fe2b9948c8-kube-api-access-x8r9p\") pod \"tigera-operator-7bc55997bb-g7xll\" (UID: \"fc34c05a-a24c-45df-99b4-54fe2b9948c8\") " pod="tigera-operator/tigera-operator-7bc55997bb-g7xll"
Jan 29 11:14:23.796287 kubelet[2617]: I0129 11:14:23.795757 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc34c05a-a24c-45df-99b4-54fe2b9948c8-var-lib-calico\") pod \"tigera-operator-7bc55997bb-g7xll\" (UID: \"fc34c05a-a24c-45df-99b4-54fe2b9948c8\") " pod="tigera-operator/tigera-operator-7bc55997bb-g7xll"
Jan 29 11:14:23.991240 kubelet[2617]: E0129 11:14:23.990952 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:23.996706 containerd[1449]: time="2025-01-29T11:14:23.996656002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgtsr,Uid:87da6278-9bf4-427d-a177-71ced9a105ff,Namespace:kube-system,Attempt:0,}"
Jan 29 11:14:24.027660 containerd[1449]: time="2025-01-29T11:14:24.027183549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:14:24.027660 containerd[1449]: time="2025-01-29T11:14:24.027629691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:14:24.027660 containerd[1449]: time="2025-01-29T11:14:24.027644213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:14:24.027862 containerd[1449]: time="2025-01-29T11:14:24.027728065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:14:24.050722 systemd[1]: Started cri-containerd-f4bc41a6ce153c96f1c65c598469444ad52a347a110989071542d47970c232c3.scope - libcontainer container f4bc41a6ce153c96f1c65c598469444ad52a347a110989071542d47970c232c3.
Jan 29 11:14:24.076039 containerd[1449]: time="2025-01-29T11:14:24.075911369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgtsr,Uid:87da6278-9bf4-427d-a177-71ced9a105ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4bc41a6ce153c96f1c65c598469444ad52a347a110989071542d47970c232c3\""
Jan 29 11:14:24.078242 containerd[1449]: time="2025-01-29T11:14:24.078210491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-g7xll,Uid:fc34c05a-a24c-45df-99b4-54fe2b9948c8,Namespace:tigera-operator,Attempt:0,}"
Jan 29 11:14:24.079623 kubelet[2617]: E0129 11:14:24.079576 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:24.084270 containerd[1449]: time="2025-01-29T11:14:24.084002062Z" level=info msg="CreateContainer within sandbox \"f4bc41a6ce153c96f1c65c598469444ad52a347a110989071542d47970c232c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:14:24.116716 containerd[1449]: time="2025-01-29T11:14:24.116655872Z" level=info msg="CreateContainer within sandbox \"f4bc41a6ce153c96f1c65c598469444ad52a347a110989071542d47970c232c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b83c63adb0021406e24ff4fcd886f1471b89ab4b61fe13df3f6a3122b8fc5268\""
Jan 29 11:14:24.118582 containerd[1449]: time="2025-01-29T11:14:24.118186606Z" level=info msg="StartContainer for \"b83c63adb0021406e24ff4fcd886f1471b89ab4b61fe13df3f6a3122b8fc5268\""
Jan 29 11:14:24.148382 containerd[1449]: time="2025-01-29T11:14:24.148244093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:14:24.148382 containerd[1449]: time="2025-01-29T11:14:24.148336426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:14:24.148382 containerd[1449]: time="2025-01-29T11:14:24.148351629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:14:24.149214 containerd[1449]: time="2025-01-29T11:14:24.148945112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:14:24.156251 systemd[1]: Started cri-containerd-b83c63adb0021406e24ff4fcd886f1471b89ab4b61fe13df3f6a3122b8fc5268.scope - libcontainer container b83c63adb0021406e24ff4fcd886f1471b89ab4b61fe13df3f6a3122b8fc5268.
Jan 29 11:14:24.160270 systemd[1]: Started cri-containerd-89b2447e0f2c379e0eda01842dca7d6ce8e21952e88f5ef9a8ab0276754ed741.scope - libcontainer container 89b2447e0f2c379e0eda01842dca7d6ce8e21952e88f5ef9a8ab0276754ed741.
Jan 29 11:14:24.189874 containerd[1449]: time="2025-01-29T11:14:24.189828394Z" level=info msg="StartContainer for \"b83c63adb0021406e24ff4fcd886f1471b89ab4b61fe13df3f6a3122b8fc5268\" returns successfully"
Jan 29 11:14:24.203974 containerd[1449]: time="2025-01-29T11:14:24.203934808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-g7xll,Uid:fc34c05a-a24c-45df-99b4-54fe2b9948c8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89b2447e0f2c379e0eda01842dca7d6ce8e21952e88f5ef9a8ab0276754ed741\""
Jan 29 11:14:24.209573 containerd[1449]: time="2025-01-29T11:14:24.206823693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 11:14:24.434485 kubelet[2617]: E0129 11:14:24.434459 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:25.281056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446602193.mount: Deactivated successfully.
Jan 29 11:14:25.879078 containerd[1449]: time="2025-01-29T11:14:25.879032184Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:25.880251 containerd[1449]: time="2025-01-29T11:14:25.880217702Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Jan 29 11:14:25.881247 containerd[1449]: time="2025-01-29T11:14:25.881212515Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:25.883846 containerd[1449]: time="2025-01-29T11:14:25.883789899Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:25.884973 containerd[1449]: time="2025-01-29T11:14:25.884420103Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.676601831s"
Jan 29 11:14:25.884973 containerd[1449]: time="2025-01-29T11:14:25.884452147Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 29 11:14:25.905198 containerd[1449]: time="2025-01-29T11:14:25.905170233Z" level=info msg="CreateContainer within sandbox \"89b2447e0f2c379e0eda01842dca7d6ce8e21952e88f5ef9a8ab0276754ed741\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 11:14:25.918116 containerd[1449]: time="2025-01-29T11:14:25.918074836Z" level=info msg="CreateContainer within sandbox \"89b2447e0f2c379e0eda01842dca7d6ce8e21952e88f5ef9a8ab0276754ed741\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"374b261c047a7dfe17ae2f4277daf76264a5100c7750a10d874b169d98704b81\""
Jan 29 11:14:25.920235 containerd[1449]: time="2025-01-29T11:14:25.920082464Z" level=info msg="StartContainer for \"374b261c047a7dfe17ae2f4277daf76264a5100c7750a10d874b169d98704b81\""
Jan 29 11:14:25.954763 systemd[1]: Started cri-containerd-374b261c047a7dfe17ae2f4277daf76264a5100c7750a10d874b169d98704b81.scope - libcontainer container 374b261c047a7dfe17ae2f4277daf76264a5100c7750a10d874b169d98704b81.
Jan 29 11:14:25.988711 containerd[1449]: time="2025-01-29T11:14:25.988647097Z" level=info msg="StartContainer for \"374b261c047a7dfe17ae2f4277daf76264a5100c7750a10d874b169d98704b81\" returns successfully"
Jan 29 11:14:26.452324 kubelet[2617]: I0129 11:14:26.452269 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wgtsr" podStartSLOduration=3.450804222 podStartE2EDuration="3.450804222s" podCreationTimestamp="2025-01-29 11:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:24.445921759 +0000 UTC m=+17.125490667" watchObservedRunningTime="2025-01-29 11:14:26.450804222 +0000 UTC m=+19.130373010"
Jan 29 11:14:26.452741 kubelet[2617]: I0129 11:14:26.452387 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-g7xll" podStartSLOduration=1.759428679 podStartE2EDuration="3.452378383s" podCreationTimestamp="2025-01-29 11:14:23 +0000 UTC" firstStartedPulling="2025-01-29 11:14:24.206192164 +0000 UTC m=+16.885760952" lastFinishedPulling="2025-01-29 11:14:25.899141868 +0000 UTC m=+18.578710656" observedRunningTime="2025-01-29 11:14:26.450350084 +0000 UTC m=+19.129918872" watchObservedRunningTime="2025-01-29 11:14:26.452378383 +0000 UTC m=+19.131947171"
Jan 29 11:14:29.721085 kubelet[2617]: I0129 11:14:29.721021 2617 topology_manager.go:215] "Topology Admit Handler" podUID="51ac6724-1216-4f4a-9d6a-64124dc6c702" podNamespace="calico-system" podName="calico-typha-669db5f494-9qrbq"
Jan 29 11:14:29.731864 systemd[1]: Created slice kubepods-besteffort-pod51ac6724_1216_4f4a_9d6a_64124dc6c702.slice - libcontainer container kubepods-besteffort-pod51ac6724_1216_4f4a_9d6a_64124dc6c702.slice.
Jan 29 11:14:29.737463 kubelet[2617]: I0129 11:14:29.737415 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51ac6724-1216-4f4a-9d6a-64124dc6c702-tigera-ca-bundle\") pod \"calico-typha-669db5f494-9qrbq\" (UID: \"51ac6724-1216-4f4a-9d6a-64124dc6c702\") " pod="calico-system/calico-typha-669db5f494-9qrbq"
Jan 29 11:14:29.737463 kubelet[2617]: I0129 11:14:29.737460 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fms88\" (UniqueName: \"kubernetes.io/projected/51ac6724-1216-4f4a-9d6a-64124dc6c702-kube-api-access-fms88\") pod \"calico-typha-669db5f494-9qrbq\" (UID: \"51ac6724-1216-4f4a-9d6a-64124dc6c702\") " pod="calico-system/calico-typha-669db5f494-9qrbq"
Jan 29 11:14:29.737621 kubelet[2617]: I0129 11:14:29.737487 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/51ac6724-1216-4f4a-9d6a-64124dc6c702-typha-certs\") pod \"calico-typha-669db5f494-9qrbq\" (UID: \"51ac6724-1216-4f4a-9d6a-64124dc6c702\") " pod="calico-system/calico-typha-669db5f494-9qrbq"
Jan 29 11:14:29.774513 kubelet[2617]: I0129 11:14:29.774460 2617 topology_manager.go:215] "Topology Admit Handler" podUID="5eb67317-8e1e-4a42-afa7-bad4d9c90d00" podNamespace="calico-system" podName="calico-node-9dnh8"
Jan 29 11:14:29.782514 systemd[1]: Created slice kubepods-besteffort-pod5eb67317_8e1e_4a42_afa7_bad4d9c90d00.slice - libcontainer container kubepods-besteffort-pod5eb67317_8e1e_4a42_afa7_bad4d9c90d00.slice.
Jan 29 11:14:29.838115 kubelet[2617]: I0129 11:14:29.838071 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-cni-net-dir\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8"
Jan 29 11:14:29.838115 kubelet[2617]: I0129 11:14:29.838114 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-xtables-lock\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8"
Jan 29 11:14:29.838278 kubelet[2617]: I0129 11:14:29.838149 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-var-lib-calico\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8"
Jan 29 11:14:29.838278 kubelet[2617]: I0129 11:14:29.838175 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-cni-bin-dir\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8"
Jan 29 11:14:29.838278 kubelet[2617]: I0129 11:14:29.838196 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-flexvol-driver-host\") pod \"calico-node-9dnh8\"
(UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838278 kubelet[2617]: I0129 11:14:29.838238 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47sml\" (UniqueName: \"kubernetes.io/projected/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-kube-api-access-47sml\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838372 kubelet[2617]: I0129 11:14:29.838284 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-tigera-ca-bundle\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838372 kubelet[2617]: I0129 11:14:29.838314 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-lib-modules\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838413 kubelet[2617]: I0129 11:14:29.838402 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-node-certs\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838438 kubelet[2617]: I0129 11:14:29.838420 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-var-run-calico\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " 
pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838461 kubelet[2617]: I0129 11:14:29.838454 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-cni-log-dir\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.838521 kubelet[2617]: I0129 11:14:29.838492 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5eb67317-8e1e-4a42-afa7-bad4d9c90d00-policysync\") pod \"calico-node-9dnh8\" (UID: \"5eb67317-8e1e-4a42-afa7-bad4d9c90d00\") " pod="calico-system/calico-node-9dnh8" Jan 29 11:14:29.887619 kubelet[2617]: I0129 11:14:29.886974 2617 topology_manager.go:215] "Topology Admit Handler" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" podNamespace="calico-system" podName="csi-node-driver-qjm8h" Jan 29 11:14:29.887619 kubelet[2617]: E0129 11:14:29.887270 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:29.939545 kubelet[2617]: I0129 11:14:29.939491 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6gcj\" (UniqueName: \"kubernetes.io/projected/51246db2-a0a0-40ce-bf4c-e10522a304db-kube-api-access-c6gcj\") pod \"csi-node-driver-qjm8h\" (UID: \"51246db2-a0a0-40ce-bf4c-e10522a304db\") " pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:29.940008 kubelet[2617]: I0129 11:14:29.939577 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/51246db2-a0a0-40ce-bf4c-e10522a304db-varrun\") pod \"csi-node-driver-qjm8h\" (UID: \"51246db2-a0a0-40ce-bf4c-e10522a304db\") " pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:29.940008 kubelet[2617]: I0129 11:14:29.939627 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51246db2-a0a0-40ce-bf4c-e10522a304db-kubelet-dir\") pod \"csi-node-driver-qjm8h\" (UID: \"51246db2-a0a0-40ce-bf4c-e10522a304db\") " pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:29.940008 kubelet[2617]: I0129 11:14:29.939669 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51246db2-a0a0-40ce-bf4c-e10522a304db-socket-dir\") pod \"csi-node-driver-qjm8h\" (UID: \"51246db2-a0a0-40ce-bf4c-e10522a304db\") " pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:29.940008 kubelet[2617]: I0129 11:14:29.939829 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51246db2-a0a0-40ce-bf4c-e10522a304db-registration-dir\") pod \"csi-node-driver-qjm8h\" (UID: \"51246db2-a0a0-40ce-bf4c-e10522a304db\") " pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:29.943623 kubelet[2617]: E0129 11:14:29.942137 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:29.943623 kubelet[2617]: W0129 11:14:29.942157 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:29.943623 kubelet[2617]: E0129 11:14:29.942174 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:29.944173 kubelet[2617]: E0129 11:14:29.944144 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:29.944249 kubelet[2617]: W0129 11:14:29.944213 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:29.944249 kubelet[2617]: E0129 11:14:29.944230 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:29.953131 kubelet[2617]: E0129 11:14:29.953114 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:29.953131 kubelet[2617]: W0129 11:14:29.953129 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:29.953229 kubelet[2617]: E0129 11:14:29.953142 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.038853 kubelet[2617]: E0129 11:14:30.038747 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:30.040156 containerd[1449]: time="2025-01-29T11:14:30.040116230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-669db5f494-9qrbq,Uid:51ac6724-1216-4f4a-9d6a-64124dc6c702,Namespace:calico-system,Attempt:0,}" Jan 29 11:14:30.040474 kubelet[2617]: E0129 11:14:30.040408 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.040474 kubelet[2617]: W0129 11:14:30.040423 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.040474 kubelet[2617]: E0129 11:14:30.040442 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.040722 kubelet[2617]: E0129 11:14:30.040692 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.040722 kubelet[2617]: W0129 11:14:30.040710 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.040722 kubelet[2617]: E0129 11:14:30.040725 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.040996 kubelet[2617]: E0129 11:14:30.040978 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.041054 kubelet[2617]: W0129 11:14:30.041041 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.041123 kubelet[2617]: E0129 11:14:30.041109 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.041472 kubelet[2617]: E0129 11:14:30.041369 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.041472 kubelet[2617]: W0129 11:14:30.041383 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.041472 kubelet[2617]: E0129 11:14:30.041400 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.041663 kubelet[2617]: E0129 11:14:30.041649 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.041721 kubelet[2617]: W0129 11:14:30.041708 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.041785 kubelet[2617]: E0129 11:14:30.041774 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.042034 kubelet[2617]: E0129 11:14:30.042020 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.042179 kubelet[2617]: W0129 11:14:30.042092 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.042179 kubelet[2617]: E0129 11:14:30.042134 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.042312 kubelet[2617]: E0129 11:14:30.042299 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.042369 kubelet[2617]: W0129 11:14:30.042358 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.042454 kubelet[2617]: E0129 11:14:30.042431 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.042674 kubelet[2617]: E0129 11:14:30.042661 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.042850 kubelet[2617]: W0129 11:14:30.042739 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.042850 kubelet[2617]: E0129 11:14:30.042764 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.042975 kubelet[2617]: E0129 11:14:30.042963 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.043029 kubelet[2617]: W0129 11:14:30.043019 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.043090 kubelet[2617]: E0129 11:14:30.043079 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.043301 kubelet[2617]: E0129 11:14:30.043287 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.043376 kubelet[2617]: W0129 11:14:30.043362 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.043437 kubelet[2617]: E0129 11:14:30.043426 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.043707 kubelet[2617]: E0129 11:14:30.043692 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.043707 kubelet[2617]: W0129 11:14:30.043707 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.043782 kubelet[2617]: E0129 11:14:30.043735 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.043919 kubelet[2617]: E0129 11:14:30.043906 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.043919 kubelet[2617]: W0129 11:14:30.043917 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.043984 kubelet[2617]: E0129 11:14:30.043968 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.044156 kubelet[2617]: E0129 11:14:30.044125 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.044156 kubelet[2617]: W0129 11:14:30.044137 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.044212 kubelet[2617]: E0129 11:14:30.044201 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.044333 kubelet[2617]: E0129 11:14:30.044319 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.044333 kubelet[2617]: W0129 11:14:30.044332 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.044393 kubelet[2617]: E0129 11:14:30.044346 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.044525 kubelet[2617]: E0129 11:14:30.044512 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.044525 kubelet[2617]: W0129 11:14:30.044523 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.044592 kubelet[2617]: E0129 11:14:30.044544 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.044899 kubelet[2617]: E0129 11:14:30.044813 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.044899 kubelet[2617]: W0129 11:14:30.044828 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.044899 kubelet[2617]: E0129 11:14:30.044845 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.045062 kubelet[2617]: E0129 11:14:30.045044 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.045062 kubelet[2617]: W0129 11:14:30.045060 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.045114 kubelet[2617]: E0129 11:14:30.045076 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.045284 kubelet[2617]: E0129 11:14:30.045263 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.045284 kubelet[2617]: W0129 11:14:30.045282 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.045358 kubelet[2617]: E0129 11:14:30.045298 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.045473 kubelet[2617]: E0129 11:14:30.045449 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.045473 kubelet[2617]: W0129 11:14:30.045471 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.045524 kubelet[2617]: E0129 11:14:30.045486 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.045753 kubelet[2617]: E0129 11:14:30.045733 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.045753 kubelet[2617]: W0129 11:14:30.045747 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.045856 kubelet[2617]: E0129 11:14:30.045757 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.045933 kubelet[2617]: E0129 11:14:30.045923 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.045933 kubelet[2617]: W0129 11:14:30.045933 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.045989 kubelet[2617]: E0129 11:14:30.045941 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.046119 kubelet[2617]: E0129 11:14:30.046110 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.046119 kubelet[2617]: W0129 11:14:30.046119 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.046178 kubelet[2617]: E0129 11:14:30.046126 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.046278 kubelet[2617]: E0129 11:14:30.046260 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.046309 kubelet[2617]: W0129 11:14:30.046294 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.046309 kubelet[2617]: E0129 11:14:30.046304 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.046495 kubelet[2617]: E0129 11:14:30.046483 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.046495 kubelet[2617]: W0129 11:14:30.046493 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.046577 kubelet[2617]: E0129 11:14:30.046502 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.056250 kubelet[2617]: E0129 11:14:30.056185 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.056250 kubelet[2617]: W0129 11:14:30.056202 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.056250 kubelet[2617]: E0129 11:14:30.056217 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:30.070169 kubelet[2617]: E0129 11:14:30.070148 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:30.070169 kubelet[2617]: W0129 11:14:30.070166 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:30.070254 kubelet[2617]: E0129 11:14:30.070181 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:30.085250 kubelet[2617]: E0129 11:14:30.085220 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:30.085735 containerd[1449]: time="2025-01-29T11:14:30.085704175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dnh8,Uid:5eb67317-8e1e-4a42-afa7-bad4d9c90d00,Namespace:calico-system,Attempt:0,}" Jan 29 11:14:30.093820 containerd[1449]: time="2025-01-29T11:14:30.093510808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:30.094223 containerd[1449]: time="2025-01-29T11:14:30.093843844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:30.094223 containerd[1449]: time="2025-01-29T11:14:30.093902370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:30.094223 containerd[1449]: time="2025-01-29T11:14:30.094070148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:30.111843 systemd[1]: Started cri-containerd-f7c9305faf1ee5c50366b04e6aef955b271780fd5b80a983827fb6fd278e7c6c.scope - libcontainer container f7c9305faf1ee5c50366b04e6aef955b271780fd5b80a983827fb6fd278e7c6c. Jan 29 11:14:30.119735 containerd[1449]: time="2025-01-29T11:14:30.119077536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:30.119735 containerd[1449]: time="2025-01-29T11:14:30.119710764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:14:30.119735 containerd[1449]: time="2025-01-29T11:14:30.119725646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:14:30.121203 containerd[1449]: time="2025-01-29T11:14:30.121062148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:14:30.138690 systemd[1]: Started cri-containerd-d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558.scope - libcontainer container d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558.
Jan 29 11:14:30.154449 containerd[1449]: time="2025-01-29T11:14:30.154411547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-669db5f494-9qrbq,Uid:51ac6724-1216-4f4a-9d6a-64124dc6c702,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7c9305faf1ee5c50366b04e6aef955b271780fd5b80a983827fb6fd278e7c6c\""
Jan 29 11:14:30.155889 kubelet[2617]: E0129 11:14:30.155852 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:30.157473 containerd[1449]: time="2025-01-29T11:14:30.157430829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 11:14:30.167952 containerd[1449]: time="2025-01-29T11:14:30.167923509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dnh8,Uid:5eb67317-8e1e-4a42-afa7-bad4d9c90d00,Namespace:calico-system,Attempt:0,} returns sandbox id \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\""
Jan 29 11:14:30.168664 kubelet[2617]: E0129 11:14:30.168644 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:31.080334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780176576.mount: Deactivated successfully.
Jan 29 11:14:31.344451 containerd[1449]: time="2025-01-29T11:14:31.344335854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:31.345256 containerd[1449]: time="2025-01-29T11:14:31.345053087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 29 11:14:31.349371 containerd[1449]: time="2025-01-29T11:14:31.349051336Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:31.357777 containerd[1449]: time="2025-01-29T11:14:31.357736145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:31.358563 containerd[1449]: time="2025-01-29T11:14:31.358365129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.200896456s"
Jan 29 11:14:31.358563 containerd[1449]: time="2025-01-29T11:14:31.358422335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 29 11:14:31.359666 containerd[1449]: time="2025-01-29T11:14:31.359636779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 11:14:31.369831 containerd[1449]: time="2025-01-29T11:14:31.368982015Z" level=info msg="CreateContainer within sandbox \"f7c9305faf1ee5c50366b04e6aef955b271780fd5b80a983827fb6fd278e7c6c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 11:14:31.381906 containerd[1449]: time="2025-01-29T11:14:31.381857853Z" level=info msg="CreateContainer within sandbox \"f7c9305faf1ee5c50366b04e6aef955b271780fd5b80a983827fb6fd278e7c6c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e01a5e7449b37d5cf3f95d3a1bbf129ad37fef14111934f40c3ae8118cd50fc\""
Jan 29 11:14:31.382361 containerd[1449]: time="2025-01-29T11:14:31.382339862Z" level=info msg="StartContainer for \"1e01a5e7449b37d5cf3f95d3a1bbf129ad37fef14111934f40c3ae8118cd50fc\""
Jan 29 11:14:31.388829 kubelet[2617]: E0129 11:14:31.388795 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db"
Jan 29 11:14:31.407711 systemd[1]: Started cri-containerd-1e01a5e7449b37d5cf3f95d3a1bbf129ad37fef14111934f40c3ae8118cd50fc.scope - libcontainer container 1e01a5e7449b37d5cf3f95d3a1bbf129ad37fef14111934f40c3ae8118cd50fc.
Jan 29 11:14:31.443213 containerd[1449]: time="2025-01-29T11:14:31.443167045Z" level=info msg="StartContainer for \"1e01a5e7449b37d5cf3f95d3a1bbf129ad37fef14111934f40c3ae8118cd50fc\" returns successfully"
Jan 29 11:14:31.450029 kubelet[2617]: E0129 11:14:31.449716 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:31.464793 kubelet[2617]: I0129 11:14:31.461522 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-669db5f494-9qrbq" podStartSLOduration=1.259037262 podStartE2EDuration="2.461502561s" podCreationTimestamp="2025-01-29 11:14:29 +0000 UTC" firstStartedPulling="2025-01-29 11:14:30.156950218 +0000 UTC m=+22.836519006" lastFinishedPulling="2025-01-29 11:14:31.359415517 +0000 UTC m=+24.038984305" observedRunningTime="2025-01-29 11:14:31.460722522 +0000 UTC m=+24.140291310" watchObservedRunningTime="2025-01-29 11:14:31.461502561 +0000 UTC m=+24.141071349"
Jan 29 11:14:31.552184 kubelet[2617]: E0129 11:14:31.552041 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.552184 kubelet[2617]: W0129 11:14:31.552068 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.552184 kubelet[2617]: E0129 11:14:31.552089 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.552760 kubelet[2617]: E0129 11:14:31.552259 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.552760 kubelet[2617]: W0129 11:14:31.552268 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.552760 kubelet[2617]: E0129 11:14:31.552276 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.552760 kubelet[2617]: E0129 11:14:31.552830 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.552760 kubelet[2617]: W0129 11:14:31.552842 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.553171 kubelet[2617]: E0129 11:14:31.552960 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.553304 kubelet[2617]: E0129 11:14:31.553252 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.553372 kubelet[2617]: W0129 11:14:31.553361 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.553551 kubelet[2617]: E0129 11:14:31.553454 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.554354 kubelet[2617]: E0129 11:14:31.554291 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.554354 kubelet[2617]: W0129 11:14:31.554306 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.554354 kubelet[2617]: E0129 11:14:31.554319 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.556565 kubelet[2617]: E0129 11:14:31.556392 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.556565 kubelet[2617]: W0129 11:14:31.556409 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.556671 kubelet[2617]: E0129 11:14:31.556583 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.557956 kubelet[2617]: E0129 11:14:31.557930 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.558013 kubelet[2617]: W0129 11:14:31.557972 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.558013 kubelet[2617]: E0129 11:14:31.557987 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.559206 kubelet[2617]: E0129 11:14:31.558633 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.559206 kubelet[2617]: W0129 11:14:31.558744 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.559206 kubelet[2617]: E0129 11:14:31.558759 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.559360 kubelet[2617]: E0129 11:14:31.559234 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.559360 kubelet[2617]: W0129 11:14:31.559245 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.559360 kubelet[2617]: E0129 11:14:31.559255 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.560757 kubelet[2617]: E0129 11:14:31.560718 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.560757 kubelet[2617]: W0129 11:14:31.560734 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.560757 kubelet[2617]: E0129 11:14:31.560747 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.561653 kubelet[2617]: E0129 11:14:31.561631 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.561653 kubelet[2617]: W0129 11:14:31.561648 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.561748 kubelet[2617]: E0129 11:14:31.561661 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.562040 kubelet[2617]: E0129 11:14:31.562002 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.562040 kubelet[2617]: W0129 11:14:31.562035 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.562138 kubelet[2617]: E0129 11:14:31.562048 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.562295 kubelet[2617]: E0129 11:14:31.562274 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.562295 kubelet[2617]: W0129 11:14:31.562287 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.562295 kubelet[2617]: E0129 11:14:31.562297 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.562497 kubelet[2617]: E0129 11:14:31.562478 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.562497 kubelet[2617]: W0129 11:14:31.562490 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.562497 kubelet[2617]: E0129 11:14:31.562500 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.562741 kubelet[2617]: E0129 11:14:31.562721 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.562741 kubelet[2617]: W0129 11:14:31.562733 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.563553 kubelet[2617]: E0129 11:14:31.563414 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.563792 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575475 kubelet[2617]: W0129 11:14:31.563804 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.563816 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.565050 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575475 kubelet[2617]: W0129 11:14:31.565063 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.565086 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.565331 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575475 kubelet[2617]: W0129 11:14:31.565347 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.565362 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575475 kubelet[2617]: E0129 11:14:31.565561 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575900 kubelet[2617]: W0129 11:14:31.565571 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.575900 kubelet[2617]: E0129 11:14:31.565586 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575900 kubelet[2617]: E0129 11:14:31.565775 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575900 kubelet[2617]: W0129 11:14:31.565785 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.575900 kubelet[2617]: E0129 11:14:31.565800 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575900 kubelet[2617]: E0129 11:14:31.566480 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575900 kubelet[2617]: W0129 11:14:31.566493 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.575900 kubelet[2617]: E0129 11:14:31.566518 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.575900 kubelet[2617]: E0129 11:14:31.567042 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.575900 kubelet[2617]: W0129 11:14:31.567056 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567074 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567462 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576108 kubelet[2617]: W0129 11:14:31.567474 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567518 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567685 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576108 kubelet[2617]: W0129 11:14:31.567696 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567737 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567849 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576108 kubelet[2617]: W0129 11:14:31.567857 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576108 kubelet[2617]: E0129 11:14:31.567903 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.568114 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576304 kubelet[2617]: W0129 11:14:31.568125 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.568141 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.568291 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576304 kubelet[2617]: W0129 11:14:31.568298 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.568312 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.568517 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576304 kubelet[2617]: W0129 11:14:31.568526 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.568555 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576304 kubelet[2617]: E0129 11:14:31.569009 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576649 kubelet[2617]: W0129 11:14:31.569025 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576649 kubelet[2617]: E0129 11:14:31.569038 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576649 kubelet[2617]: E0129 11:14:31.569187 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576649 kubelet[2617]: W0129 11:14:31.569193 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576649 kubelet[2617]: E0129 11:14:31.569201 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576649 kubelet[2617]: E0129 11:14:31.569367 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576649 kubelet[2617]: W0129 11:14:31.569374 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.576649 kubelet[2617]: E0129 11:14:31.569384 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.576649 kubelet[2617]: E0129 11:14:31.569789 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.576649 kubelet[2617]: W0129 11:14:31.569802 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.577046 kubelet[2617]: E0129 11:14:31.569823 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:31.577046 kubelet[2617]: E0129 11:14:31.571548 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:31.577046 kubelet[2617]: W0129 11:14:31.571564 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:31.577046 kubelet[2617]: E0129 11:14:31.571585 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.389932 containerd[1449]: time="2025-01-29T11:14:32.389889028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:32.390774 containerd[1449]: time="2025-01-29T11:14:32.390727990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 29 11:14:32.391770 containerd[1449]: time="2025-01-29T11:14:32.391740369Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:32.394178 containerd[1449]: time="2025-01-29T11:14:32.394140965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:32.395545 containerd[1449]: time="2025-01-29T11:14:32.395501939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.035830356s"
Jan 29 11:14:32.395575 containerd[1449]: time="2025-01-29T11:14:32.395560144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 29 11:14:32.400983 containerd[1449]: time="2025-01-29T11:14:32.400938553Z" level=info msg="CreateContainer within sandbox \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:14:32.412047 containerd[1449]: time="2025-01-29T11:14:32.412003039Z" level=info msg="CreateContainer within sandbox \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330\""
Jan 29 11:14:32.414079 containerd[1449]: time="2025-01-29T11:14:32.412531051Z" level=info msg="StartContainer for \"8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330\""
Jan 29 11:14:32.444734 systemd[1]: Started cri-containerd-8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330.scope - libcontainer container 8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330.
Jan 29 11:14:32.454573 kubelet[2617]: I0129 11:14:32.454459 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:14:32.455171 kubelet[2617]: E0129 11:14:32.455119 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:32.471025 kubelet[2617]: E0129 11:14:32.470996 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.471025 kubelet[2617]: W0129 11:14:32.471018 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.471187 kubelet[2617]: E0129 11:14:32.471037 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.471251 kubelet[2617]: E0129 11:14:32.471229 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.471251 kubelet[2617]: W0129 11:14:32.471247 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.471306 kubelet[2617]: E0129 11:14:32.471255 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.471435 kubelet[2617]: E0129 11:14:32.471416 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.471435 kubelet[2617]: W0129 11:14:32.471434 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.471497 kubelet[2617]: E0129 11:14:32.471443 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.471625 kubelet[2617]: E0129 11:14:32.471613 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.471663 kubelet[2617]: W0129 11:14:32.471624 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.471663 kubelet[2617]: E0129 11:14:32.471639 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.471813 kubelet[2617]: E0129 11:14:32.471798 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.471813 kubelet[2617]: W0129 11:14:32.471809 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.471872 kubelet[2617]: E0129 11:14:32.471818 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.471960 kubelet[2617]: E0129 11:14:32.471945 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.471986 kubelet[2617]: W0129 11:14:32.471959 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.471986 kubelet[2617]: E0129 11:14:32.471967 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.472113 kubelet[2617]: E0129 11:14:32.472104 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.472137 kubelet[2617]: W0129 11:14:32.472112 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.472137 kubelet[2617]: E0129 11:14:32.472122 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.472265 kubelet[2617]: E0129 11:14:32.472251 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.472291 kubelet[2617]: W0129 11:14:32.472265 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.472291 kubelet[2617]: E0129 11:14:32.472273 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.472452 kubelet[2617]: E0129 11:14:32.472441 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.472481 kubelet[2617]: W0129 11:14:32.472451 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.472481 kubelet[2617]: E0129 11:14:32.472460 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.472622 kubelet[2617]: E0129 11:14:32.472610 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.472622 kubelet[2617]: W0129 11:14:32.472621 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.472681 kubelet[2617]: E0129 11:14:32.472630 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:14:32.472773 kubelet[2617]: E0129 11:14:32.472761 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:14:32.472802 kubelet[2617]: W0129 11:14:32.472775 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:14:32.472802 kubelet[2617]: E0129 11:14:32.472783 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 11:14:32.472932 kubelet[2617]: E0129 11:14:32.472915 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.472932 kubelet[2617]: W0129 11:14:32.472925 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.472984 kubelet[2617]: E0129 11:14:32.472932 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.473075 kubelet[2617]: E0129 11:14:32.473063 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.473104 kubelet[2617]: W0129 11:14:32.473074 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.473104 kubelet[2617]: E0129 11:14:32.473088 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.473234 kubelet[2617]: E0129 11:14:32.473219 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.473255 kubelet[2617]: W0129 11:14:32.473234 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.473255 kubelet[2617]: E0129 11:14:32.473242 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.473386 kubelet[2617]: E0129 11:14:32.473372 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.473411 kubelet[2617]: W0129 11:14:32.473387 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.473411 kubelet[2617]: E0129 11:14:32.473395 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.474696 kubelet[2617]: E0129 11:14:32.474667 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.474696 kubelet[2617]: W0129 11:14:32.474685 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.474696 kubelet[2617]: E0129 11:14:32.474698 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.474911 kubelet[2617]: E0129 11:14:32.474898 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.474911 kubelet[2617]: W0129 11:14:32.474910 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.474971 kubelet[2617]: E0129 11:14:32.474920 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.475138 kubelet[2617]: E0129 11:14:32.475124 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.475138 kubelet[2617]: W0129 11:14:32.475135 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.475201 kubelet[2617]: E0129 11:14:32.475146 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.475363 kubelet[2617]: E0129 11:14:32.475349 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.475363 kubelet[2617]: W0129 11:14:32.475360 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.475425 kubelet[2617]: E0129 11:14:32.475371 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.475632 kubelet[2617]: E0129 11:14:32.475618 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.475632 kubelet[2617]: W0129 11:14:32.475631 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.475712 kubelet[2617]: E0129 11:14:32.475654 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.475828 kubelet[2617]: E0129 11:14:32.475815 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.475861 kubelet[2617]: W0129 11:14:32.475828 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.475861 kubelet[2617]: E0129 11:14:32.475843 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.476055 kubelet[2617]: E0129 11:14:32.476041 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.476055 kubelet[2617]: W0129 11:14:32.476053 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.476128 kubelet[2617]: E0129 11:14:32.476078 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.476230 kubelet[2617]: E0129 11:14:32.476219 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.476230 kubelet[2617]: W0129 11:14:32.476230 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.476337 kubelet[2617]: E0129 11:14:32.476299 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.476487 kubelet[2617]: E0129 11:14:32.476472 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.476487 kubelet[2617]: W0129 11:14:32.476482 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.476587 kubelet[2617]: E0129 11:14:32.476492 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.476826 kubelet[2617]: E0129 11:14:32.476796 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.476826 kubelet[2617]: W0129 11:14:32.476812 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.476826 kubelet[2617]: E0129 11:14:32.476823 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.476995 kubelet[2617]: E0129 11:14:32.476983 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.476995 kubelet[2617]: W0129 11:14:32.476995 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.477049 kubelet[2617]: E0129 11:14:32.477004 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.477197 kubelet[2617]: E0129 11:14:32.477183 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.477197 kubelet[2617]: W0129 11:14:32.477194 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.477262 kubelet[2617]: E0129 11:14:32.477203 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.477621 kubelet[2617]: E0129 11:14:32.477604 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.477621 kubelet[2617]: W0129 11:14:32.477620 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.477683 kubelet[2617]: E0129 11:14:32.477630 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.477846 kubelet[2617]: E0129 11:14:32.477834 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.477846 kubelet[2617]: W0129 11:14:32.477845 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.477911 kubelet[2617]: E0129 11:14:32.477854 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.478009 kubelet[2617]: E0129 11:14:32.477999 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.478039 kubelet[2617]: W0129 11:14:32.478009 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.478039 kubelet[2617]: E0129 11:14:32.478018 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.478189 kubelet[2617]: E0129 11:14:32.478174 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.478189 kubelet[2617]: W0129 11:14:32.478187 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.478245 kubelet[2617]: E0129 11:14:32.478196 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:14:32.478402 kubelet[2617]: E0129 11:14:32.478388 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.478402 kubelet[2617]: W0129 11:14:32.478400 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.478468 kubelet[2617]: E0129 11:14:32.478408 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.478737 kubelet[2617]: E0129 11:14:32.478724 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:14:32.478737 kubelet[2617]: W0129 11:14:32.478736 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:14:32.478804 kubelet[2617]: E0129 11:14:32.478746 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:14:32.488916 containerd[1449]: time="2025-01-29T11:14:32.488872227Z" level=info msg="StartContainer for \"8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330\" returns successfully" Jan 29 11:14:32.527359 systemd[1]: cri-containerd-8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330.scope: Deactivated successfully. Jan 29 11:14:32.547717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330-rootfs.mount: Deactivated successfully. 
Jan 29 11:14:32.613116 containerd[1449]: time="2025-01-29T11:14:32.602749729Z" level=info msg="shim disconnected" id=8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330 namespace=k8s.io Jan 29 11:14:32.613116 containerd[1449]: time="2025-01-29T11:14:32.613106426Z" level=warning msg="cleaning up after shim disconnected" id=8b8041e70a9e334f6e9dbab53aa2e94f77da22ba96b4c94e1d544efdb11e2330 namespace=k8s.io Jan 29 11:14:32.613116 containerd[1449]: time="2025-01-29T11:14:32.613121387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:33.389855 kubelet[2617]: E0129 11:14:33.388848 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:33.455045 kubelet[2617]: E0129 11:14:33.455004 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:33.457504 containerd[1449]: time="2025-01-29T11:14:33.457286118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:14:35.389578 kubelet[2617]: E0129 11:14:35.389409 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:35.670443 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:58574.service - OpenSSH per-connection server daemon (10.0.0.1:58574). 
Jan 29 11:14:35.715357 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 58574 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:35.716751 sshd-session[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:35.722016 systemd-logind[1426]: New session 8 of user core. Jan 29 11:14:35.728684 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:14:35.917932 sshd[3323]: Connection closed by 10.0.0.1 port 58574 Jan 29 11:14:35.918491 sshd-session[3321]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:35.923408 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:58574.service: Deactivated successfully. Jan 29 11:14:35.925213 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:14:35.925960 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:14:35.927196 systemd-logind[1426]: Removed session 8. Jan 29 11:14:36.756009 containerd[1449]: time="2025-01-29T11:14:36.755964980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:36.756855 containerd[1449]: time="2025-01-29T11:14:36.756738525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 11:14:36.758656 containerd[1449]: time="2025-01-29T11:14:36.757681365Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:36.759874 containerd[1449]: time="2025-01-29T11:14:36.759839866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:36.760651 containerd[1449]: time="2025-01-29T11:14:36.760628973Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.303280969s" Jan 29 11:14:36.760742 containerd[1449]: time="2025-01-29T11:14:36.760727101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 11:14:36.762587 containerd[1449]: time="2025-01-29T11:14:36.762560415Z" level=info msg="CreateContainer within sandbox \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:14:36.774827 containerd[1449]: time="2025-01-29T11:14:36.774732119Z" level=info msg="CreateContainer within sandbox \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565\"" Jan 29 11:14:36.775576 containerd[1449]: time="2025-01-29T11:14:36.775114391Z" level=info msg="StartContainer for \"ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565\"" Jan 29 11:14:36.807771 systemd[1]: Started cri-containerd-ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565.scope - libcontainer container ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565. Jan 29 11:14:36.831064 containerd[1449]: time="2025-01-29T11:14:36.831009214Z" level=info msg="StartContainer for \"ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565\" returns successfully" Jan 29 11:14:37.327345 systemd[1]: cri-containerd-ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565.scope: Deactivated successfully. 
Jan 29 11:14:37.346386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565-rootfs.mount: Deactivated successfully. Jan 29 11:14:37.380687 kubelet[2617]: I0129 11:14:37.379954 2617 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:14:37.398691 systemd[1]: Created slice kubepods-besteffort-pod51246db2_a0a0_40ce_bf4c_e10522a304db.slice - libcontainer container kubepods-besteffort-pod51246db2_a0a0_40ce_bf4c_e10522a304db.slice. Jan 29 11:14:37.402446 containerd[1449]: time="2025-01-29T11:14:37.402405725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:0,}" Jan 29 11:14:37.416895 kubelet[2617]: I0129 11:14:37.416851 2617 topology_manager.go:215] "Topology Admit Handler" podUID="64991b07-942c-4246-a46f-4589ee4a9827" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:37.424577 kubelet[2617]: I0129 11:14:37.424163 2617 topology_manager.go:215] "Topology Admit Handler" podUID="5db914b0-6a91-420a-9300-e102983010e9" podNamespace="calico-apiserver" podName="calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:37.425615 kubelet[2617]: I0129 11:14:37.425380 2617 topology_manager.go:215] "Topology Admit Handler" podUID="06aa249c-a866-428e-8d59-48acbc7fcd5e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:37.426754 kubelet[2617]: I0129 11:14:37.426721 2617 topology_manager.go:215] "Topology Admit Handler" podUID="3ebd6aa9-d128-4a03-9b92-9b846f7c50c7" podNamespace="calico-system" podName="calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:37.428158 kubelet[2617]: I0129 11:14:37.427990 2617 topology_manager.go:215] "Topology Admit Handler" podUID="a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7" podNamespace="calico-apiserver" podName="calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:37.434578 systemd[1]: Created slice 
kubepods-burstable-pod64991b07_942c_4246_a46f_4589ee4a9827.slice - libcontainer container kubepods-burstable-pod64991b07_942c_4246_a46f_4589ee4a9827.slice. Jan 29 11:14:37.442999 systemd[1]: Created slice kubepods-besteffort-pod5db914b0_6a91_420a_9300_e102983010e9.slice - libcontainer container kubepods-besteffort-pod5db914b0_6a91_420a_9300_e102983010e9.slice. Jan 29 11:14:37.451251 containerd[1449]: time="2025-01-29T11:14:37.450521429Z" level=info msg="shim disconnected" id=ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565 namespace=k8s.io Jan 29 11:14:37.451251 containerd[1449]: time="2025-01-29T11:14:37.450614516Z" level=warning msg="cleaning up after shim disconnected" id=ce15e3377f3b526d726e55594adadbb637d084c31fee8cd2c1d663086ea47565 namespace=k8s.io Jan 29 11:14:37.451251 containerd[1449]: time="2025-01-29T11:14:37.450626157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:37.454384 systemd[1]: Created slice kubepods-burstable-pod06aa249c_a866_428e_8d59_48acbc7fcd5e.slice - libcontainer container kubepods-burstable-pod06aa249c_a866_428e_8d59_48acbc7fcd5e.slice. Jan 29 11:14:37.466399 kubelet[2617]: E0129 11:14:37.466367 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:37.466931 systemd[1]: Created slice kubepods-besteffort-pod3ebd6aa9_d128_4a03_9b92_9b846f7c50c7.slice - libcontainer container kubepods-besteffort-pod3ebd6aa9_d128_4a03_9b92_9b846f7c50c7.slice. Jan 29 11:14:37.483255 systemd[1]: Created slice kubepods-besteffort-poda9b2c67e_8a72_413f_8ca7_b4e54dd85bb7.slice - libcontainer container kubepods-besteffort-poda9b2c67e_8a72_413f_8ca7_b4e54dd85bb7.slice. 
Jan 29 11:14:37.509370 kubelet[2617]: I0129 11:14:37.509321 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktv4q\" (UniqueName: \"kubernetes.io/projected/5db914b0-6a91-420a-9300-e102983010e9-kube-api-access-ktv4q\") pod \"calico-apiserver-5fc6dd774d-6k2vx\" (UID: \"5db914b0-6a91-420a-9300-e102983010e9\") " pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:37.509370 kubelet[2617]: I0129 11:14:37.509370 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7-calico-apiserver-certs\") pod \"calico-apiserver-5fc6dd774d-mfrdw\" (UID: \"a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7\") " pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:37.509525 kubelet[2617]: I0129 11:14:37.509390 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ebd6aa9-d128-4a03-9b92-9b846f7c50c7-tigera-ca-bundle\") pod \"calico-kube-controllers-67779b498c-2wfqf\" (UID: \"3ebd6aa9-d128-4a03-9b92-9b846f7c50c7\") " pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:37.509525 kubelet[2617]: I0129 11:14:37.509408 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqtm8\" (UniqueName: \"kubernetes.io/projected/a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7-kube-api-access-gqtm8\") pod \"calico-apiserver-5fc6dd774d-mfrdw\" (UID: \"a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7\") " pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:37.509525 kubelet[2617]: I0129 11:14:37.509426 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/64991b07-942c-4246-a46f-4589ee4a9827-config-volume\") pod \"coredns-7db6d8ff4d-r4x6g\" (UID: \"64991b07-942c-4246-a46f-4589ee4a9827\") " pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:37.509525 kubelet[2617]: I0129 11:14:37.509444 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7wl\" (UniqueName: \"kubernetes.io/projected/64991b07-942c-4246-a46f-4589ee4a9827-kube-api-access-tf7wl\") pod \"coredns-7db6d8ff4d-r4x6g\" (UID: \"64991b07-942c-4246-a46f-4589ee4a9827\") " pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:37.509525 kubelet[2617]: I0129 11:14:37.509462 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq4bq\" (UniqueName: \"kubernetes.io/projected/3ebd6aa9-d128-4a03-9b92-9b846f7c50c7-kube-api-access-sq4bq\") pod \"calico-kube-controllers-67779b498c-2wfqf\" (UID: \"3ebd6aa9-d128-4a03-9b92-9b846f7c50c7\") " pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:37.509661 kubelet[2617]: I0129 11:14:37.509482 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5db914b0-6a91-420a-9300-e102983010e9-calico-apiserver-certs\") pod \"calico-apiserver-5fc6dd774d-6k2vx\" (UID: \"5db914b0-6a91-420a-9300-e102983010e9\") " pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:37.509661 kubelet[2617]: I0129 11:14:37.509498 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06aa249c-a866-428e-8d59-48acbc7fcd5e-config-volume\") pod \"coredns-7db6d8ff4d-wlnjj\" (UID: \"06aa249c-a866-428e-8d59-48acbc7fcd5e\") " pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:37.509661 kubelet[2617]: I0129 11:14:37.509515 2617 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbnc\" (UniqueName: \"kubernetes.io/projected/06aa249c-a866-428e-8d59-48acbc7fcd5e-kube-api-access-thbnc\") pod \"coredns-7db6d8ff4d-wlnjj\" (UID: \"06aa249c-a866-428e-8d59-48acbc7fcd5e\") " pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:37.614261 kubelet[2617]: I0129 11:14:37.612726 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:14:37.614261 kubelet[2617]: E0129 11:14:37.613350 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:37.739469 kubelet[2617]: E0129 11:14:37.739425 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:37.741951 containerd[1449]: time="2025-01-29T11:14:37.741798544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:37.742887 containerd[1449]: time="2025-01-29T11:14:37.742854870Z" level=error msg="Failed to destroy network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.749119 containerd[1449]: time="2025-01-29T11:14:37.749066614Z" level=error msg="encountered an error cleaning up failed sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:14:37.749235 containerd[1449]: time="2025-01-29T11:14:37.749162822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.749443 containerd[1449]: time="2025-01-29T11:14:37.749412802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:14:37.758305 kubelet[2617]: E0129 11:14:37.758234 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.760072 kubelet[2617]: E0129 11:14:37.758326 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:37.760072 kubelet[2617]: E0129 11:14:37.758348 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:37.760072 kubelet[2617]: E0129 11:14:37.758395 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:37.766341 kubelet[2617]: E0129 11:14:37.765972 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:37.767187 containerd[1449]: time="2025-01-29T11:14:37.767135480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:37.771532 containerd[1449]: time="2025-01-29T11:14:37.771472152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:0,}" Jan 29 11:14:37.780524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871-shm.mount: Deactivated 
successfully. Jan 29 11:14:37.790413 containerd[1449]: time="2025-01-29T11:14:37.790350924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:14:37.840662 containerd[1449]: time="2025-01-29T11:14:37.840570959Z" level=error msg="Failed to destroy network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.841440 containerd[1449]: time="2025-01-29T11:14:37.841055358Z" level=error msg="encountered an error cleaning up failed sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.841440 containerd[1449]: time="2025-01-29T11:14:37.841132524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.841528 kubelet[2617]: E0129 11:14:37.841479 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.841600 kubelet[2617]: E0129 11:14:37.841580 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:37.841631 kubelet[2617]: E0129 11:14:37.841604 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:37.841717 kubelet[2617]: E0129 11:14:37.841663 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r4x6g" podUID="64991b07-942c-4246-a46f-4589ee4a9827" Jan 29 11:14:37.889608 containerd[1449]: time="2025-01-29T11:14:37.889438004Z" level=error msg="Failed to destroy 
network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.889956 containerd[1449]: time="2025-01-29T11:14:37.889833276Z" level=error msg="encountered an error cleaning up failed sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.890052 containerd[1449]: time="2025-01-29T11:14:37.889991009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.890139 containerd[1449]: time="2025-01-29T11:14:37.889915723Z" level=error msg="Failed to destroy network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.890726 containerd[1449]: time="2025-01-29T11:14:37.890432045Z" level=error msg="encountered an error cleaning up failed sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.890726 containerd[1449]: time="2025-01-29T11:14:37.890474088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.891297 containerd[1449]: time="2025-01-29T11:14:37.891000651Z" level=error msg="Failed to destroy network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.891328 kubelet[2617]: E0129 11:14:37.890883 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.891328 kubelet[2617]: E0129 11:14:37.890937 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 
29 11:14:37.891328 kubelet[2617]: E0129 11:14:37.890956 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:37.891328 kubelet[2617]: E0129 11:14:37.890882 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.891424 kubelet[2617]: E0129 11:14:37.891051 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:37.891424 kubelet[2617]: E0129 11:14:37.891049 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wlnjj" podUID="06aa249c-a866-428e-8d59-48acbc7fcd5e" Jan 29 11:14:37.891424 kubelet[2617]: E0129 11:14:37.891072 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:37.891504 kubelet[2617]: E0129 11:14:37.891110 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" podUID="a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7" Jan 29 11:14:37.892133 containerd[1449]: time="2025-01-29T11:14:37.892086739Z" level=error msg="encountered an error cleaning up failed sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 29 11:14:37.892181 containerd[1449]: time="2025-01-29T11:14:37.892141103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.892744 kubelet[2617]: E0129 11:14:37.892667 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.892744 kubelet[2617]: E0129 11:14:37.892701 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:37.893788 kubelet[2617]: E0129 11:14:37.892716 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:37.893788 kubelet[2617]: E0129 11:14:37.893484 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" podUID="5db914b0-6a91-420a-9300-e102983010e9" Jan 29 11:14:37.905971 containerd[1449]: time="2025-01-29T11:14:37.905930502Z" level=error msg="Failed to destroy network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.906245 containerd[1449]: time="2025-01-29T11:14:37.906220686Z" level=error msg="encountered an error cleaning up failed sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.906290 containerd[1449]: time="2025-01-29T11:14:37.906268770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.906486 kubelet[2617]: E0129 11:14:37.906454 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:37.906523 kubelet[2617]: E0129 11:14:37.906505 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:37.906575 kubelet[2617]: E0129 11:14:37.906522 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:37.906618 kubelet[2617]: E0129 11:14:37.906587 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" podUID="3ebd6aa9-d128-4a03-9b92-9b846f7c50c7" Jan 29 11:14:38.468621 kubelet[2617]: I0129 11:14:38.468522 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9" Jan 29 11:14:38.469825 containerd[1449]: time="2025-01-29T11:14:38.469424154Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\"" Jan 29 11:14:38.469825 containerd[1449]: time="2025-01-29T11:14:38.469600727Z" level=info msg="Ensure that sandbox e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9 in task-service has been cleanup successfully" Jan 29 11:14:38.470155 containerd[1449]: time="2025-01-29T11:14:38.470122968Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully" Jan 29 11:14:38.470155 containerd[1449]: time="2025-01-29T11:14:38.470150130Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully" Jan 29 11:14:38.470532 kubelet[2617]: I0129 11:14:38.470509 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f" Jan 29 11:14:38.470683 containerd[1449]: 
time="2025-01-29T11:14:38.470658490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:1,}" Jan 29 11:14:38.471031 containerd[1449]: time="2025-01-29T11:14:38.470942593Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:14:38.471356 containerd[1449]: time="2025-01-29T11:14:38.471319462Z" level=info msg="Ensure that sandbox a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f in task-service has been cleanup successfully" Jan 29 11:14:38.471742 containerd[1449]: time="2025-01-29T11:14:38.471571282Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully" Jan 29 11:14:38.471742 containerd[1449]: time="2025-01-29T11:14:38.471639967Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully" Jan 29 11:14:38.471914 kubelet[2617]: E0129 11:14:38.471870 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:38.472511 kubelet[2617]: I0129 11:14:38.472488 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5" Jan 29 11:14:38.473073 containerd[1449]: time="2025-01-29T11:14:38.473037037Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:14:38.473474 containerd[1449]: time="2025-01-29T11:14:38.473040397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:1,}" Jan 29 11:14:38.473663 containerd[1449]: time="2025-01-29T11:14:38.473630323Z" 
level=info msg="Ensure that sandbox ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5 in task-service has been cleanup successfully" Jan 29 11:14:38.474068 containerd[1449]: time="2025-01-29T11:14:38.474010753Z" level=info msg="TearDown network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully" Jan 29 11:14:38.474157 containerd[1449]: time="2025-01-29T11:14:38.474140963Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully" Jan 29 11:14:38.474444 kubelet[2617]: I0129 11:14:38.474425 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382" Jan 29 11:14:38.474829 containerd[1449]: time="2025-01-29T11:14:38.474741370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:14:38.475076 containerd[1449]: time="2025-01-29T11:14:38.475045034Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\"" Jan 29 11:14:38.475215 containerd[1449]: time="2025-01-29T11:14:38.475187045Z" level=info msg="Ensure that sandbox bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382 in task-service has been cleanup successfully" Jan 29 11:14:38.475620 containerd[1449]: time="2025-01-29T11:14:38.475596557Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully" Jan 29 11:14:38.475620 containerd[1449]: time="2025-01-29T11:14:38.475619239Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully" Jan 29 11:14:38.476385 containerd[1449]: time="2025-01-29T11:14:38.476062954Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:1,}" Jan 29 11:14:38.476430 kubelet[2617]: E0129 11:14:38.475829 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:38.478398 kubelet[2617]: E0129 11:14:38.478371 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:38.479377 containerd[1449]: time="2025-01-29T11:14:38.479340210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:14:38.480009 kubelet[2617]: I0129 11:14:38.479984 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c" Jan 29 11:14:38.480486 containerd[1449]: time="2025-01-29T11:14:38.480451338Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:14:38.480669 containerd[1449]: time="2025-01-29T11:14:38.480618151Z" level=info msg="Ensure that sandbox a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c in task-service has been cleanup successfully" Jan 29 11:14:38.480826 containerd[1449]: time="2025-01-29T11:14:38.480786884Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully" Jan 29 11:14:38.480826 containerd[1449]: time="2025-01-29T11:14:38.480806325Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully" Jan 29 11:14:38.481661 containerd[1449]: time="2025-01-29T11:14:38.481603028Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 11:14:38.481796 kubelet[2617]: I0129 11:14:38.481762 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871"
Jan 29 11:14:38.482553 kubelet[2617]: E0129 11:14:38.482223 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:38.483883 containerd[1449]: time="2025-01-29T11:14:38.483785079Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\""
Jan 29 11:14:38.484670 containerd[1449]: time="2025-01-29T11:14:38.484396407Z" level=info msg="Ensure that sandbox 3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871 in task-service has been cleanup successfully"
Jan 29 11:14:38.484999 containerd[1449]: time="2025-01-29T11:14:38.484950650Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully"
Jan 29 11:14:38.484999 containerd[1449]: time="2025-01-29T11:14:38.484972052Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully"
Jan 29 11:14:38.486984 containerd[1449]: time="2025-01-29T11:14:38.486946806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:1,}"
Jan 29 11:14:38.588301 containerd[1449]: time="2025-01-29T11:14:38.588151735Z" level=error msg="Failed to destroy network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.589231 containerd[1449]: time="2025-01-29T11:14:38.589190937Z" level=error msg="encountered an error cleaning up failed sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.589671 containerd[1449]: time="2025-01-29T11:14:38.589446957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.590567 kubelet[2617]: E0129 11:14:38.590288 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.590567 kubelet[2617]: E0129 11:14:38.590344 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf"
Jan 29 11:14:38.590567 kubelet[2617]: E0129 11:14:38.590364 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf"
Jan 29 11:14:38.590903 kubelet[2617]: E0129 11:14:38.590404 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" podUID="3ebd6aa9-d128-4a03-9b92-9b846f7c50c7"
Jan 29 11:14:38.596080 containerd[1449]: time="2025-01-29T11:14:38.596027152Z" level=error msg="Failed to destroy network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.596939 containerd[1449]: time="2025-01-29T11:14:38.596896180Z" level=error msg="encountered an error cleaning up failed sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.599489 containerd[1449]: time="2025-01-29T11:14:38.597511509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.599610 kubelet[2617]: E0129 11:14:38.597797 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.599610 kubelet[2617]: E0129 11:14:38.597849 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj"
Jan 29 11:14:38.599610 kubelet[2617]: E0129 11:14:38.597869 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj"
Jan 29 11:14:38.599715 kubelet[2617]: E0129 11:14:38.597903 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wlnjj" podUID="06aa249c-a866-428e-8d59-48acbc7fcd5e"
Jan 29 11:14:38.603349 containerd[1449]: time="2025-01-29T11:14:38.603310483Z" level=error msg="Failed to destroy network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.604096 containerd[1449]: time="2025-01-29T11:14:38.603705234Z" level=error msg="encountered an error cleaning up failed sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.604096 containerd[1449]: time="2025-01-29T11:14:38.603762758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.604195 kubelet[2617]: E0129 11:14:38.603939 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.604195 kubelet[2617]: E0129 11:14:38.603979 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g"
Jan 29 11:14:38.604195 kubelet[2617]: E0129 11:14:38.603998 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g"
Jan 29 11:14:38.604263 kubelet[2617]: E0129 11:14:38.604039 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r4x6g" podUID="64991b07-942c-4246-a46f-4589ee4a9827"
Jan 29 11:14:38.614460 containerd[1449]: time="2025-01-29T11:14:38.614363949Z" level=error msg="Failed to destroy network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.615253 containerd[1449]: time="2025-01-29T11:14:38.614948755Z" level=error msg="encountered an error cleaning up failed sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.615253 containerd[1449]: time="2025-01-29T11:14:38.615008279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.615474 kubelet[2617]: E0129 11:14:38.615313 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.615474 kubelet[2617]: E0129 11:14:38.615381 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx"
Jan 29 11:14:38.615474 kubelet[2617]: E0129 11:14:38.615400 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx"
Jan 29 11:14:38.615602 kubelet[2617]: E0129 11:14:38.615453 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" podUID="5db914b0-6a91-420a-9300-e102983010e9"
Jan 29 11:14:38.629259 containerd[1449]: time="2025-01-29T11:14:38.629216793Z" level=error msg="Failed to destroy network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.629775 containerd[1449]: time="2025-01-29T11:14:38.629674228Z" level=error msg="encountered an error cleaning up failed sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.629775 containerd[1449]: time="2025-01-29T11:14:38.629734633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.630032 containerd[1449]: time="2025-01-29T11:14:38.629992173Z" level=error msg="Failed to destroy network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.630377 kubelet[2617]: E0129 11:14:38.630119 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.630377 kubelet[2617]: E0129 11:14:38.630176 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw"
Jan 29 11:14:38.630377 kubelet[2617]: E0129 11:14:38.630195 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw"
Jan 29 11:14:38.630489 containerd[1449]: time="2025-01-29T11:14:38.630301038Z" level=error msg="encountered an error cleaning up failed sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.630489 containerd[1449]: time="2025-01-29T11:14:38.630342481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.630533 kubelet[2617]: E0129 11:14:38.630235 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" podUID="a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7"
Jan 29 11:14:38.630749 kubelet[2617]: E0129 11:14:38.630718 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:38.630794 kubelet[2617]: E0129 11:14:38.630762 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h"
Jan 29 11:14:38.630794 kubelet[2617]: E0129 11:14:38.630779 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h"
Jan 29 11:14:38.630838 kubelet[2617]: E0129 11:14:38.630805 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db"
Jan 29 11:14:38.772868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9-shm.mount: Deactivated successfully.
Jan 29 11:14:38.772956 systemd[1]: run-netns-cni\x2d3a9007bd\x2d9185\x2d2f81\x2da164\x2d4fa223c8d15c.mount: Deactivated successfully.
Jan 29 11:14:38.773002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c-shm.mount: Deactivated successfully.
Jan 29 11:14:38.773047 systemd[1]: run-netns-cni\x2db558b347\x2d2cf4\x2d17b3\x2dec8a\x2d8bfebc8f416e.mount: Deactivated successfully.
Jan 29 11:14:38.773105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5-shm.mount: Deactivated successfully.
Jan 29 11:14:38.773157 systemd[1]: run-netns-cni\x2d69c366b9\x2dad33\x2d45e7\x2d909f\x2d2d28566f22cf.mount: Deactivated successfully.
Jan 29 11:14:38.773205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382-shm.mount: Deactivated successfully.
Jan 29 11:14:38.773256 systemd[1]: run-netns-cni\x2df07bc5ee\x2decd3\x2dd531\x2d1d8b\x2da6be97757601.mount: Deactivated successfully.
Jan 29 11:14:39.485785 kubelet[2617]: I0129 11:14:39.485147 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b"
Jan 29 11:14:39.486224 containerd[1449]: time="2025-01-29T11:14:39.485661058Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\""
Jan 29 11:14:39.486224 containerd[1449]: time="2025-01-29T11:14:39.485840112Z" level=info msg="Ensure that sandbox 65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b in task-service has been cleanup successfully"
Jan 29 11:14:39.487671 containerd[1449]: time="2025-01-29T11:14:39.487620727Z" level=info msg="TearDown network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" successfully"
Jan 29 11:14:39.487671 containerd[1449]: time="2025-01-29T11:14:39.487646288Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" returns successfully"
Jan 29 11:14:39.488520 systemd[1]: run-netns-cni\x2d2f01b6ca\x2d37a6\x2d1e43\x2d43f0\x2d2614c3c33b7c.mount: Deactivated successfully.
Jan 29 11:14:39.488656 containerd[1449]: time="2025-01-29T11:14:39.488592320Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\""
Jan 29 11:14:39.488689 containerd[1449]: time="2025-01-29T11:14:39.488676807Z" level=info msg="TearDown network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully"
Jan 29 11:14:39.488714 containerd[1449]: time="2025-01-29T11:14:39.488687447Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully"
Jan 29 11:14:39.489265 containerd[1449]: time="2025-01-29T11:14:39.489215407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:2,}"
Jan 29 11:14:39.489322 kubelet[2617]: I0129 11:14:39.489258 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03"
Jan 29 11:14:39.489731 containerd[1449]: time="2025-01-29T11:14:39.489700324Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\""
Jan 29 11:14:39.489949 containerd[1449]: time="2025-01-29T11:14:39.489922021Z" level=info msg="Ensure that sandbox 709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03 in task-service has been cleanup successfully"
Jan 29 11:14:39.491100 containerd[1449]: time="2025-01-29T11:14:39.491014744Z" level=info msg="TearDown network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" successfully"
Jan 29 11:14:39.491100 containerd[1449]: time="2025-01-29T11:14:39.491035705Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" returns successfully"
Jan 29 11:14:39.491610 containerd[1449]: time="2025-01-29T11:14:39.491568065Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\""
Jan 29 11:14:39.491665 containerd[1449]: time="2025-01-29T11:14:39.491647071Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully"
Jan 29 11:14:39.491665 containerd[1449]: time="2025-01-29T11:14:39.491657992Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully"
Jan 29 11:14:39.492786 systemd[1]: run-netns-cni\x2d28adf04a\x2d5c4b\x2d5cbf\x2d961f\x2d8fd30b2404ed.mount: Deactivated successfully.
Jan 29 11:14:39.492990 kubelet[2617]: E0129 11:14:39.492956 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:39.492990 kubelet[2617]: I0129 11:14:39.492979 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e"
Jan 29 11:14:39.493274 containerd[1449]: time="2025-01-29T11:14:39.493242712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:2,}"
Jan 29 11:14:39.494417 containerd[1449]: time="2025-01-29T11:14:39.494352356Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\""
Jan 29 11:14:39.494600 containerd[1449]: time="2025-01-29T11:14:39.494530290Z" level=info msg="Ensure that sandbox d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e in task-service has been cleanup successfully"
Jan 29 11:14:39.498686 kubelet[2617]: I0129 11:14:39.494990 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e"
Jan 29 11:14:39.498686 kubelet[2617]: I0129 11:14:39.498415 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583"
Jan 29 11:14:39.497080 systemd[1]: run-netns-cni\x2d0c9e6dfc\x2d11b9\x2dda45\x2dfaed\x2d66d1863d1d93.mount: Deactivated successfully.
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.495456480Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\""
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.495619412Z" level=info msg="Ensure that sandbox 4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e in task-service has been cleanup successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.495931316Z" level=info msg="TearDown network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.495993161Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" returns successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.496483438Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\""
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.496572004Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.496581645Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.496792101Z" level=info msg="TearDown network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.496807942Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" returns successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.497233014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:2,}"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.497423549Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\""
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.497500595Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.497510835Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully"
Jan 29 11:14:39.498851 containerd[1449]: time="2025-01-29T11:14:39.498063717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:2,}"
Jan 29 11:14:39.497167 systemd[1]: run-netns-cni\x2dd216f6fa\x2dd429\x2d8d25\x2dbf65\x2d167276c397c4.mount: Deactivated successfully.
Jan 29 11:14:39.499188 containerd[1449]: time="2025-01-29T11:14:39.498861218Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\""
Jan 29 11:14:39.499188 containerd[1449]: time="2025-01-29T11:14:39.498992708Z" level=info msg="Ensure that sandbox 8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583 in task-service has been cleanup successfully"
Jan 29 11:14:39.499666 containerd[1449]: time="2025-01-29T11:14:39.499488985Z" level=info msg="TearDown network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" successfully"
Jan 29 11:14:39.499666 containerd[1449]: time="2025-01-29T11:14:39.499658278Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" returns successfully"
Jan 29 11:14:39.500311 kubelet[2617]: I0129 11:14:39.500287 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381"
Jan 29 11:14:39.500372 containerd[1449]: time="2025-01-29T11:14:39.500263204Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\""
Jan 29 11:14:39.500421 containerd[1449]: time="2025-01-29T11:14:39.500399654Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully"
Jan 29 11:14:39.500421 containerd[1449]: time="2025-01-29T11:14:39.500417216Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully"
Jan 29 11:14:39.500822 containerd[1449]: time="2025-01-29T11:14:39.500787044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:2,}"
Jan 29 11:14:39.501014 containerd[1449]: time="2025-01-29T11:14:39.500982058Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\""
Jan 29 11:14:39.501139 containerd[1449]: time="2025-01-29T11:14:39.501107908Z" level=info msg="Ensure that sandbox 69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381 in task-service has been cleanup successfully"
Jan 29 11:14:39.501399 containerd[1449]: time="2025-01-29T11:14:39.501375128Z" level=info msg="TearDown network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" successfully"
Jan 29 11:14:39.501399 containerd[1449]: time="2025-01-29T11:14:39.501394610Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" returns successfully"
Jan 29 11:14:39.501934 containerd[1449]: time="2025-01-29T11:14:39.501893167Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\""
Jan 29 11:14:39.502003 containerd[1449]: time="2025-01-29T11:14:39.501973533Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully"
Jan 29 11:14:39.502003 containerd[1449]: time="2025-01-29T11:14:39.501984094Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully"
Jan 29 11:14:39.502367 kubelet[2617]: E0129 11:14:39.502163 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:14:39.503489 containerd[1449]: time="2025-01-29T11:14:39.502913765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:2,}"
Jan 29 11:14:39.761350 containerd[1449]: time="2025-01-29T11:14:39.761061272Z" level=error msg="Failed to destroy network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:39.761484 containerd[1449]: time="2025-01-29T11:14:39.761447981Z" level=error msg="encountered an error cleaning up failed sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:39.761568 containerd[1449]: time="2025-01-29T11:14:39.761511586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:39.761799 kubelet[2617]: E0129 11:14:39.761760 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:14:39.762212 kubelet[2617]: E0129 11:14:39.761912 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf"
Jan 29 11:14:39.762212 kubelet[2617]: E0129 11:14:39.761939 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf"
Jan 29 11:14:39.762212 kubelet[2617]: E0129 11:14:39.761984 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" podUID="3ebd6aa9-d128-4a03-9b92-9b846f7c50c7"
Jan 29 11:14:39.777483 systemd[1]: run-netns-cni\x2d4510d5f2\x2d7f97\x2dae40\x2def79\x2d72afc6d44fd2.mount: Deactivated successfully.
Jan 29 11:14:39.777878 systemd[1]: run-netns-cni\x2d706239d1\x2d35e8\x2dae37\x2dbdb3\x2df7502f19daf0.mount: Deactivated successfully.
Jan 29 11:14:39.779930 containerd[1449]: time="2025-01-29T11:14:39.779677602Z" level=error msg="Failed to destroy network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.780314 containerd[1449]: time="2025-01-29T11:14:39.780272847Z" level=error msg="encountered an error cleaning up failed sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.780364 containerd[1449]: time="2025-01-29T11:14:39.780341612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.781593 kubelet[2617]: E0129 11:14:39.780688 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.781593 kubelet[2617]: E0129 11:14:39.780739 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:39.781593 kubelet[2617]: E0129 11:14:39.780759 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:39.781709 kubelet[2617]: E0129 11:14:39.780793 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r4x6g" podUID="64991b07-942c-4246-a46f-4589ee4a9827" Jan 29 11:14:39.782565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2-shm.mount: Deactivated successfully. 
Jan 29 11:14:39.800251 containerd[1449]: time="2025-01-29T11:14:39.799012226Z" level=error msg="Failed to destroy network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.800251 containerd[1449]: time="2025-01-29T11:14:39.799349451Z" level=error msg="encountered an error cleaning up failed sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.800251 containerd[1449]: time="2025-01-29T11:14:39.799412416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.800713 kubelet[2617]: E0129 11:14:39.800680 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.801026 kubelet[2617]: E0129 11:14:39.801002 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:39.801204 containerd[1449]: time="2025-01-29T11:14:39.801164909Z" level=error msg="Failed to destroy network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.802559 kubelet[2617]: E0129 11:14:39.801268 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:39.801989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a-shm.mount: Deactivated successfully. 
Jan 29 11:14:39.802687 containerd[1449]: time="2025-01-29T11:14:39.801437329Z" level=error msg="encountered an error cleaning up failed sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.802687 containerd[1449]: time="2025-01-29T11:14:39.802020734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.803134 kubelet[2617]: E0129 11:14:39.802174 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.803134 kubelet[2617]: E0129 11:14:39.802857 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:39.803134 kubelet[2617]: E0129 11:14:39.802876 2617 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:39.803255 kubelet[2617]: E0129 11:14:39.802807 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" podUID="5db914b0-6a91-420a-9300-e102983010e9" Jan 29 11:14:39.803255 kubelet[2617]: E0129 11:14:39.802916 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-wlnjj" podUID="06aa249c-a866-428e-8d59-48acbc7fcd5e" Jan 29 11:14:39.805097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21-shm.mount: Deactivated successfully. Jan 29 11:14:39.810567 containerd[1449]: time="2025-01-29T11:14:39.810518017Z" level=error msg="Failed to destroy network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.811610 containerd[1449]: time="2025-01-29T11:14:39.811057578Z" level=error msg="encountered an error cleaning up failed sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.811610 containerd[1449]: time="2025-01-29T11:14:39.811112062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.811783 kubelet[2617]: E0129 11:14:39.811265 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.811783 kubelet[2617]: E0129 11:14:39.811300 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:39.811783 kubelet[2617]: E0129 11:14:39.811316 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:39.811877 kubelet[2617]: E0129 11:14:39.811392 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:39.812339 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d-shm.mount: Deactivated successfully. Jan 29 11:14:39.814404 containerd[1449]: time="2025-01-29T11:14:39.814295023Z" level=error msg="Failed to destroy network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.815576 containerd[1449]: time="2025-01-29T11:14:39.815476832Z" level=error msg="encountered an error cleaning up failed sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.815872 containerd[1449]: time="2025-01-29T11:14:39.815532117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:39.817036 kubelet[2617]: E0129 11:14:39.816366 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 29 11:14:39.817036 kubelet[2617]: E0129 11:14:39.816405 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:39.817036 kubelet[2617]: E0129 11:14:39.816421 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:39.817175 kubelet[2617]: E0129 11:14:39.816449 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" podUID="a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7" Jan 29 11:14:40.510588 kubelet[2617]: I0129 11:14:40.509787 2617 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21" Jan 29 11:14:40.511066 containerd[1449]: time="2025-01-29T11:14:40.510923638Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" Jan 29 11:14:40.511320 containerd[1449]: time="2025-01-29T11:14:40.511114812Z" level=info msg="Ensure that sandbox e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21 in task-service has been cleanup successfully" Jan 29 11:14:40.511384 containerd[1449]: time="2025-01-29T11:14:40.511305426Z" level=info msg="TearDown network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" successfully" Jan 29 11:14:40.511384 containerd[1449]: time="2025-01-29T11:14:40.511332668Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" returns successfully" Jan 29 11:14:40.513337 containerd[1449]: time="2025-01-29T11:14:40.513310533Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" Jan 29 11:14:40.513947 containerd[1449]: time="2025-01-29T11:14:40.513925218Z" level=info msg="TearDown network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" successfully" Jan 29 11:14:40.514242 containerd[1449]: time="2025-01-29T11:14:40.514168436Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" returns successfully" Jan 29 11:14:40.514468 kubelet[2617]: I0129 11:14:40.514449 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d" Jan 29 11:14:40.515099 containerd[1449]: time="2025-01-29T11:14:40.515068981Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" Jan 29 11:14:40.515386 containerd[1449]: time="2025-01-29T11:14:40.515220153Z" level=info msg="Ensure 
that sandbox f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d in task-service has been cleanup successfully" Jan 29 11:14:40.516151 containerd[1449]: time="2025-01-29T11:14:40.515261636Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:14:40.516151 containerd[1449]: time="2025-01-29T11:14:40.515456530Z" level=info msg="TearDown network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" successfully" Jan 29 11:14:40.516151 containerd[1449]: time="2025-01-29T11:14:40.515648264Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" returns successfully" Jan 29 11:14:40.516151 containerd[1449]: time="2025-01-29T11:14:40.515857159Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" Jan 29 11:14:40.516151 containerd[1449]: time="2025-01-29T11:14:40.515914683Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully" Jan 29 11:14:40.516151 containerd[1449]: time="2025-01-29T11:14:40.515933925Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully" Jan 29 11:14:40.516303 kubelet[2617]: E0129 11:14:40.516093 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:40.516303 kubelet[2617]: I0129 11:14:40.516200 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16" Jan 29 11:14:40.516533 containerd[1449]: time="2025-01-29T11:14:40.516436082Z" level=info msg="TearDown network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" successfully" Jan 29 11:14:40.516533 
containerd[1449]: time="2025-01-29T11:14:40.516460883Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" returns successfully" Jan 29 11:14:40.516744 containerd[1449]: time="2025-01-29T11:14:40.516716102Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\"" Jan 29 11:14:40.516786 containerd[1449]: time="2025-01-29T11:14:40.516718422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:3,}" Jan 29 11:14:40.516873 containerd[1449]: time="2025-01-29T11:14:40.516857313Z" level=info msg="Ensure that sandbox 869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16 in task-service has been cleanup successfully" Jan 29 11:14:40.517177 containerd[1449]: time="2025-01-29T11:14:40.516998243Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" Jan 29 11:14:40.517177 containerd[1449]: time="2025-01-29T11:14:40.517066728Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully" Jan 29 11:14:40.517177 containerd[1449]: time="2025-01-29T11:14:40.517075889Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully" Jan 29 11:14:40.518045 containerd[1449]: time="2025-01-29T11:14:40.518008717Z" level=info msg="TearDown network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" successfully" Jan 29 11:14:40.518045 containerd[1449]: time="2025-01-29T11:14:40.518042119Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" returns successfully" Jan 29 11:14:40.518389 containerd[1449]: time="2025-01-29T11:14:40.518197611Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:3,}" Jan 29 11:14:40.518594 containerd[1449]: time="2025-01-29T11:14:40.518572438Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\"" Jan 29 11:14:40.518714 containerd[1449]: time="2025-01-29T11:14:40.518698127Z" level=info msg="TearDown network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" successfully" Jan 29 11:14:40.518836 containerd[1449]: time="2025-01-29T11:14:40.518822857Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" returns successfully" Jan 29 11:14:40.519350 containerd[1449]: time="2025-01-29T11:14:40.519324613Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\"" Jan 29 11:14:40.519476 containerd[1449]: time="2025-01-29T11:14:40.519461183Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully" Jan 29 11:14:40.519524 containerd[1449]: time="2025-01-29T11:14:40.519513267Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully" Jan 29 11:14:40.519964 containerd[1449]: time="2025-01-29T11:14:40.519940538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:3,}" Jan 29 11:14:40.520404 kubelet[2617]: I0129 11:14:40.520382 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9" Jan 29 11:14:40.521249 containerd[1449]: time="2025-01-29T11:14:40.521141186Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" Jan 29 11:14:40.521649 
containerd[1449]: time="2025-01-29T11:14:40.521623982Z" level=info msg="Ensure that sandbox 35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9 in task-service has been cleanup successfully" Jan 29 11:14:40.522125 containerd[1449]: time="2025-01-29T11:14:40.522100457Z" level=info msg="TearDown network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" successfully" Jan 29 11:14:40.522254 containerd[1449]: time="2025-01-29T11:14:40.522194064Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" returns successfully" Jan 29 11:14:40.526299 containerd[1449]: time="2025-01-29T11:14:40.526267082Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" Jan 29 11:14:40.526381 containerd[1449]: time="2025-01-29T11:14:40.526371610Z" level=info msg="TearDown network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" successfully" Jan 29 11:14:40.526406 containerd[1449]: time="2025-01-29T11:14:40.526383250Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" returns successfully" Jan 29 11:14:40.526707 kubelet[2617]: I0129 11:14:40.526681 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a" Jan 29 11:14:40.527522 containerd[1449]: time="2025-01-29T11:14:40.527123665Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" Jan 29 11:14:40.527522 containerd[1449]: time="2025-01-29T11:14:40.527272076Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:14:40.527522 containerd[1449]: time="2025-01-29T11:14:40.527307278Z" level=info msg="Ensure that sandbox 56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a in task-service has been 
cleanup successfully" Jan 29 11:14:40.527522 containerd[1449]: time="2025-01-29T11:14:40.527340881Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully" Jan 29 11:14:40.527522 containerd[1449]: time="2025-01-29T11:14:40.527351081Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully" Jan 29 11:14:40.527522 containerd[1449]: time="2025-01-29T11:14:40.527482091Z" level=info msg="TearDown network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" successfully" Jan 29 11:14:40.527723 containerd[1449]: time="2025-01-29T11:14:40.527501892Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" returns successfully" Jan 29 11:14:40.528018 containerd[1449]: time="2025-01-29T11:14:40.527985648Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" Jan 29 11:14:40.528074 containerd[1449]: time="2025-01-29T11:14:40.528019330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:14:40.528074 containerd[1449]: time="2025-01-29T11:14:40.528063614Z" level=info msg="TearDown network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" successfully" Jan 29 11:14:40.528113 containerd[1449]: time="2025-01-29T11:14:40.528073814Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" returns successfully" Jan 29 11:14:40.528483 containerd[1449]: time="2025-01-29T11:14:40.528460003Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:14:40.528647 containerd[1449]: time="2025-01-29T11:14:40.528629375Z" level=info msg="TearDown network for 
sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully" Jan 29 11:14:40.528715 containerd[1449]: time="2025-01-29T11:14:40.528702300Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully" Jan 29 11:14:40.529365 containerd[1449]: time="2025-01-29T11:14:40.529341587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:14:40.529724 kubelet[2617]: I0129 11:14:40.529702 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2" Jan 29 11:14:40.530118 containerd[1449]: time="2025-01-29T11:14:40.530061720Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\"" Jan 29 11:14:40.530353 containerd[1449]: time="2025-01-29T11:14:40.530316059Z" level=info msg="Ensure that sandbox ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2 in task-service has been cleanup successfully" Jan 29 11:14:40.530692 containerd[1449]: time="2025-01-29T11:14:40.530616321Z" level=info msg="TearDown network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" successfully" Jan 29 11:14:40.530692 containerd[1449]: time="2025-01-29T11:14:40.530638162Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" returns successfully" Jan 29 11:14:40.531100 containerd[1449]: time="2025-01-29T11:14:40.530932224Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\"" Jan 29 11:14:40.531100 containerd[1449]: time="2025-01-29T11:14:40.531001789Z" level=info msg="TearDown network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" successfully" Jan 29 
11:14:40.531100 containerd[1449]: time="2025-01-29T11:14:40.531010789Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" returns successfully" Jan 29 11:14:40.531592 containerd[1449]: time="2025-01-29T11:14:40.531562670Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\"" Jan 29 11:14:40.531747 containerd[1449]: time="2025-01-29T11:14:40.531730762Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully" Jan 29 11:14:40.531862 containerd[1449]: time="2025-01-29T11:14:40.531799407Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully" Jan 29 11:14:40.532102 kubelet[2617]: E0129 11:14:40.532079 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:40.541349 containerd[1449]: time="2025-01-29T11:14:40.541309824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:3,}" Jan 29 11:14:40.780221 systemd[1]: run-netns-cni\x2db8e9340a\x2dccad\x2da2ec\x2d4a2c\x2d12739d79340d.mount: Deactivated successfully. Jan 29 11:14:40.780353 systemd[1]: run-netns-cni\x2d518db07f\x2dc825\x2ddb32\x2d79f4\x2d082a394b3cd5.mount: Deactivated successfully. Jan 29 11:14:40.780402 systemd[1]: run-netns-cni\x2d90a48626\x2dfba6\x2dc3f8\x2dc262\x2d0ecda63f4f08.mount: Deactivated successfully. Jan 29 11:14:40.780450 systemd[1]: run-netns-cni\x2d5feae547\x2dc91b\x2dbfef\x2dadd7\x2d8c8ca1cc6d43.mount: Deactivated successfully. Jan 29 11:14:40.780501 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9-shm.mount: Deactivated successfully. 
Jan 29 11:14:40.780571 systemd[1]: run-netns-cni\x2da4e84e6c\x2d20df\x2d8140\x2d9540\x2d12e48729b6e7.mount: Deactivated successfully. Jan 29 11:14:40.780631 systemd[1]: run-netns-cni\x2ddec4242d\x2d9ff8\x2dd06e\x2d3e0b\x2d85d3df2ac5ac.mount: Deactivated successfully. Jan 29 11:14:40.788795 containerd[1449]: time="2025-01-29T11:14:40.788744232Z" level=error msg="Failed to destroy network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.791547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c-shm.mount: Deactivated successfully. Jan 29 11:14:40.791685 containerd[1449]: time="2025-01-29T11:14:40.791639084Z" level=error msg="encountered an error cleaning up failed sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.792053 containerd[1449]: time="2025-01-29T11:14:40.791716809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.792418 kubelet[2617]: E0129 11:14:40.792374 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.792626 kubelet[2617]: E0129 11:14:40.792456 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:40.792626 kubelet[2617]: E0129 11:14:40.792478 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:40.792626 kubelet[2617]: E0129 11:14:40.792517 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-wlnjj" podUID="06aa249c-a866-428e-8d59-48acbc7fcd5e" Jan 29 11:14:40.809836 containerd[1449]: time="2025-01-29T11:14:40.809595559Z" level=error msg="Failed to destroy network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.812038 containerd[1449]: time="2025-01-29T11:14:40.811984014Z" level=error msg="Failed to destroy network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.813724 containerd[1449]: time="2025-01-29T11:14:40.812352081Z" level=error msg="encountered an error cleaning up failed sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.812915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350-shm.mount: Deactivated successfully. 
Jan 29 11:14:40.814035 containerd[1449]: time="2025-01-29T11:14:40.814003322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.814180 containerd[1449]: time="2025-01-29T11:14:40.812863079Z" level=error msg="encountered an error cleaning up failed sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.814405 kubelet[2617]: E0129 11:14:40.814366 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.814468 kubelet[2617]: E0129 11:14:40.814430 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:40.814502 kubelet[2617]: E0129 11:14:40.814469 2617 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:40.814775 kubelet[2617]: E0129 11:14:40.814510 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:40.815321 containerd[1449]: time="2025-01-29T11:14:40.815188489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.815817 kubelet[2617]: E0129 11:14:40.815671 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.815817 kubelet[2617]: E0129 11:14:40.815715 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:40.815817 kubelet[2617]: E0129 11:14:40.815749 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:40.816056 kubelet[2617]: E0129 11:14:40.815780 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" podUID="a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7" Jan 29 11:14:40.816317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd-shm.mount: Deactivated successfully. Jan 29 11:14:40.832735 containerd[1449]: time="2025-01-29T11:14:40.832674410Z" level=error msg="Failed to destroy network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.833594 containerd[1449]: time="2025-01-29T11:14:40.832992313Z" level=error msg="encountered an error cleaning up failed sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.833594 containerd[1449]: time="2025-01-29T11:14:40.833063399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.834877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839-shm.mount: Deactivated successfully. 
Jan 29 11:14:40.835141 kubelet[2617]: E0129 11:14:40.834941 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.835141 kubelet[2617]: E0129 11:14:40.835000 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:40.835141 kubelet[2617]: E0129 11:14:40.835020 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:40.836475 kubelet[2617]: E0129 11:14:40.835298 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r4x6g" podUID="64991b07-942c-4246-a46f-4589ee4a9827" Jan 29 11:14:40.847258 containerd[1449]: time="2025-01-29T11:14:40.847210915Z" level=error msg="Failed to destroy network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.847614 containerd[1449]: time="2025-01-29T11:14:40.847531339Z" level=error msg="encountered an error cleaning up failed sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.847695 containerd[1449]: time="2025-01-29T11:14:40.847642667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.847843 kubelet[2617]: E0129 11:14:40.847812 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.847888 kubelet[2617]: E0129 11:14:40.847859 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:40.847888 kubelet[2617]: E0129 11:14:40.847877 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:40.847937 kubelet[2617]: E0129 11:14:40.847916 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" podUID="3ebd6aa9-d128-4a03-9b92-9b846f7c50c7" Jan 29 11:14:40.861324 
containerd[1449]: time="2025-01-29T11:14:40.861265385Z" level=error msg="Failed to destroy network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.861898 containerd[1449]: time="2025-01-29T11:14:40.861858228Z" level=error msg="encountered an error cleaning up failed sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.861947 containerd[1449]: time="2025-01-29T11:14:40.861922473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.862202 kubelet[2617]: E0129 11:14:40.862171 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:40.862245 kubelet[2617]: E0129 11:14:40.862221 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:40.862270 kubelet[2617]: E0129 11:14:40.862252 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:40.862314 kubelet[2617]: E0129 11:14:40.862291 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" podUID="5db914b0-6a91-420a-9300-e102983010e9" Jan 29 11:14:40.929922 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:58582.service - OpenSSH per-connection server daemon (10.0.0.1:58582). 
Jan 29 11:14:40.980721 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 58582 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:40.982092 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:40.987331 systemd-logind[1426]: New session 9 of user core. Jan 29 11:14:40.991689 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:14:41.120752 sshd[4298]: Connection closed by 10.0.0.1 port 58582 Jan 29 11:14:41.121329 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:41.125897 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:58582.service: Deactivated successfully. Jan 29 11:14:41.128665 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:14:41.131907 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:14:41.133173 systemd-logind[1426]: Removed session 9. Jan 29 11:14:41.535588 kubelet[2617]: I0129 11:14:41.534502 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd" Jan 29 11:14:41.535931 containerd[1449]: time="2025-01-29T11:14:41.535011553Z" level=info msg="StopPodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\"" Jan 29 11:14:41.535931 containerd[1449]: time="2025-01-29T11:14:41.535167204Z" level=info msg="Ensure that sandbox b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd in task-service has been cleanup successfully" Jan 29 11:14:41.535931 containerd[1449]: time="2025-01-29T11:14:41.535364938Z" level=info msg="TearDown network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" successfully" Jan 29 11:14:41.535931 containerd[1449]: time="2025-01-29T11:14:41.535391980Z" level=info msg="StopPodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" returns successfully" Jan 29 
11:14:41.536258 containerd[1449]: time="2025-01-29T11:14:41.536221439Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" Jan 29 11:14:41.536362 containerd[1449]: time="2025-01-29T11:14:41.536307925Z" level=info msg="TearDown network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" successfully" Jan 29 11:14:41.536362 containerd[1449]: time="2025-01-29T11:14:41.536318886Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" returns successfully" Jan 29 11:14:41.536691 containerd[1449]: time="2025-01-29T11:14:41.536658550Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" Jan 29 11:14:41.536748 containerd[1449]: time="2025-01-29T11:14:41.536736475Z" level=info msg="TearDown network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" successfully" Jan 29 11:14:41.536748 containerd[1449]: time="2025-01-29T11:14:41.536745596Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" returns successfully" Jan 29 11:14:41.538836 containerd[1449]: time="2025-01-29T11:14:41.538810702Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:14:41.538904 containerd[1449]: time="2025-01-29T11:14:41.538890828Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully" Jan 29 11:14:41.538930 containerd[1449]: time="2025-01-29T11:14:41.538903029Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully" Jan 29 11:14:41.540245 containerd[1449]: time="2025-01-29T11:14:41.539933262Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:14:41.540520 kubelet[2617]: I0129 11:14:41.540496 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749" Jan 29 11:14:41.541687 containerd[1449]: time="2025-01-29T11:14:41.541664705Z" level=info msg="StopPodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\"" Jan 29 11:14:41.542265 containerd[1449]: time="2025-01-29T11:14:41.542084775Z" level=info msg="Ensure that sandbox 99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749 in task-service has been cleanup successfully" Jan 29 11:14:41.542411 containerd[1449]: time="2025-01-29T11:14:41.542387116Z" level=info msg="TearDown network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" successfully" Jan 29 11:14:41.542474 containerd[1449]: time="2025-01-29T11:14:41.542461522Z" level=info msg="StopPodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" returns successfully" Jan 29 11:14:41.543096 containerd[1449]: time="2025-01-29T11:14:41.542830748Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" Jan 29 11:14:41.543096 containerd[1449]: time="2025-01-29T11:14:41.542900233Z" level=info msg="TearDown network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" successfully" Jan 29 11:14:41.543096 containerd[1449]: time="2025-01-29T11:14:41.542909673Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" returns successfully" Jan 29 11:14:41.543861 containerd[1449]: time="2025-01-29T11:14:41.543810457Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" Jan 29 11:14:41.543931 containerd[1449]: 
time="2025-01-29T11:14:41.543901784Z" level=info msg="TearDown network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" successfully" Jan 29 11:14:41.543931 containerd[1449]: time="2025-01-29T11:14:41.543911944Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" returns successfully" Jan 29 11:14:41.544107 kubelet[2617]: I0129 11:14:41.544086 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839" Jan 29 11:14:41.545106 containerd[1449]: time="2025-01-29T11:14:41.544915496Z" level=info msg="StopPodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\"" Jan 29 11:14:41.545226 containerd[1449]: time="2025-01-29T11:14:41.544923296Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:14:41.545367 containerd[1449]: time="2025-01-29T11:14:41.545348486Z" level=info msg="TearDown network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully" Jan 29 11:14:41.545430 containerd[1449]: time="2025-01-29T11:14:41.545416331Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully" Jan 29 11:14:41.545522 containerd[1449]: time="2025-01-29T11:14:41.545211677Z" level=info msg="Ensure that sandbox 29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839 in task-service has been cleanup successfully" Jan 29 11:14:41.545763 containerd[1449]: time="2025-01-29T11:14:41.545743314Z" level=info msg="TearDown network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" successfully" Jan 29 11:14:41.545886 containerd[1449]: time="2025-01-29T11:14:41.545819200Z" level=info msg="StopPodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" returns 
successfully" Jan 29 11:14:41.546737 containerd[1449]: time="2025-01-29T11:14:41.546701542Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\"" Jan 29 11:14:41.547184 containerd[1449]: time="2025-01-29T11:14:41.546981682Z" level=info msg="TearDown network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" successfully" Jan 29 11:14:41.547184 containerd[1449]: time="2025-01-29T11:14:41.547000684Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" returns successfully" Jan 29 11:14:41.547184 containerd[1449]: time="2025-01-29T11:14:41.547096930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:14:41.547662 containerd[1449]: time="2025-01-29T11:14:41.547641449Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\"" Jan 29 11:14:41.548030 containerd[1449]: time="2025-01-29T11:14:41.547951391Z" level=info msg="TearDown network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" successfully" Jan 29 11:14:41.548030 containerd[1449]: time="2025-01-29T11:14:41.547971192Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" returns successfully" Jan 29 11:14:41.548318 containerd[1449]: time="2025-01-29T11:14:41.548232371Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\"" Jan 29 11:14:41.548409 containerd[1449]: time="2025-01-29T11:14:41.548397623Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully" Jan 29 11:14:41.548499 containerd[1449]: time="2025-01-29T11:14:41.548412304Z" level=info msg="StopPodSandbox for 
\"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully" Jan 29 11:14:41.549455 kubelet[2617]: I0129 11:14:41.549067 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c" Jan 29 11:14:41.549727 kubelet[2617]: E0129 11:14:41.549702 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:41.549891 containerd[1449]: time="2025-01-29T11:14:41.549855446Z" level=info msg="StopPodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\"" Jan 29 11:14:41.550151 containerd[1449]: time="2025-01-29T11:14:41.550123505Z" level=info msg="Ensure that sandbox d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c in task-service has been cleanup successfully" Jan 29 11:14:41.550322 containerd[1449]: time="2025-01-29T11:14:41.550131706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:4,}" Jan 29 11:14:41.550562 containerd[1449]: time="2025-01-29T11:14:41.550513973Z" level=info msg="TearDown network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" successfully" Jan 29 11:14:41.550562 containerd[1449]: time="2025-01-29T11:14:41.550559616Z" level=info msg="StopPodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" returns successfully" Jan 29 11:14:41.550940 containerd[1449]: time="2025-01-29T11:14:41.550849317Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\"" Jan 29 11:14:41.550940 containerd[1449]: time="2025-01-29T11:14:41.550937843Z" level=info msg="TearDown network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" successfully" Jan 
29 11:14:41.551016 containerd[1449]: time="2025-01-29T11:14:41.550950084Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" returns successfully" Jan 29 11:14:41.551570 containerd[1449]: time="2025-01-29T11:14:41.551485002Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\"" Jan 29 11:14:41.552567 kubelet[2617]: I0129 11:14:41.552397 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c" Jan 29 11:14:41.553076 containerd[1449]: time="2025-01-29T11:14:41.553043872Z" level=info msg="StopPodSandbox for \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\"" Jan 29 11:14:41.553216 containerd[1449]: time="2025-01-29T11:14:41.553189083Z" level=info msg="Ensure that sandbox 79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c in task-service has been cleanup successfully" Jan 29 11:14:41.553360 containerd[1449]: time="2025-01-29T11:14:41.553342294Z" level=info msg="TearDown network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" successfully" Jan 29 11:14:41.553406 containerd[1449]: time="2025-01-29T11:14:41.553359575Z" level=info msg="StopPodSandbox for \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" returns successfully" Jan 29 11:14:41.553606 containerd[1449]: time="2025-01-29T11:14:41.553587751Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" Jan 29 11:14:41.553675 containerd[1449]: time="2025-01-29T11:14:41.553659396Z" level=info msg="TearDown network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" successfully" Jan 29 11:14:41.553675 containerd[1449]: time="2025-01-29T11:14:41.553672477Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" 
returns successfully" Jan 29 11:14:41.553852 containerd[1449]: time="2025-01-29T11:14:41.553835409Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" Jan 29 11:14:41.553906 containerd[1449]: time="2025-01-29T11:14:41.553891853Z" level=info msg="TearDown network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" successfully" Jan 29 11:14:41.553934 containerd[1449]: time="2025-01-29T11:14:41.553905694Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" returns successfully" Jan 29 11:14:41.554299 containerd[1449]: time="2025-01-29T11:14:41.554266599Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:14:41.554373 containerd[1449]: time="2025-01-29T11:14:41.554356726Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully" Jan 29 11:14:41.554373 containerd[1449]: time="2025-01-29T11:14:41.554370567Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully" Jan 29 11:14:41.554867 kubelet[2617]: E0129 11:14:41.554679 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:41.555386 containerd[1449]: time="2025-01-29T11:14:41.555348276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:4,}" Jan 29 11:14:41.556061 kubelet[2617]: I0129 11:14:41.555934 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350" Jan 29 11:14:41.556419 containerd[1449]: time="2025-01-29T11:14:41.556386470Z" level=info 
msg="StopPodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\"" Jan 29 11:14:41.556595 containerd[1449]: time="2025-01-29T11:14:41.556574963Z" level=info msg="Ensure that sandbox 97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350 in task-service has been cleanup successfully" Jan 29 11:14:41.556874 containerd[1449]: time="2025-01-29T11:14:41.556748215Z" level=info msg="TearDown network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" successfully" Jan 29 11:14:41.556874 containerd[1449]: time="2025-01-29T11:14:41.556769697Z" level=info msg="StopPodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" returns successfully" Jan 29 11:14:41.557291 containerd[1449]: time="2025-01-29T11:14:41.557112841Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" Jan 29 11:14:41.557291 containerd[1449]: time="2025-01-29T11:14:41.557204568Z" level=info msg="TearDown network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" successfully" Jan 29 11:14:41.557291 containerd[1449]: time="2025-01-29T11:14:41.557216569Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" returns successfully" Jan 29 11:14:41.558161 containerd[1449]: time="2025-01-29T11:14:41.557467546Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" Jan 29 11:14:41.558161 containerd[1449]: time="2025-01-29T11:14:41.557696203Z" level=info msg="TearDown network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" successfully" Jan 29 11:14:41.558161 containerd[1449]: time="2025-01-29T11:14:41.557711964Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" returns successfully" Jan 29 11:14:41.558161 containerd[1449]: 
time="2025-01-29T11:14:41.557979703Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" Jan 29 11:14:41.558161 containerd[1449]: time="2025-01-29T11:14:41.558082230Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully" Jan 29 11:14:41.558161 containerd[1449]: time="2025-01-29T11:14:41.558092191Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully" Jan 29 11:14:41.558639 containerd[1449]: time="2025-01-29T11:14:41.558503940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:4,}" Jan 29 11:14:41.654361 containerd[1449]: time="2025-01-29T11:14:41.653435116Z" level=info msg="TearDown network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" successfully" Jan 29 11:14:41.654361 containerd[1449]: time="2025-01-29T11:14:41.653495160Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" returns successfully" Jan 29 11:14:41.655243 containerd[1449]: time="2025-01-29T11:14:41.655205081Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\"" Jan 29 11:14:41.655333 containerd[1449]: time="2025-01-29T11:14:41.655315969Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully" Jan 29 11:14:41.655362 containerd[1449]: time="2025-01-29T11:14:41.655329770Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully" Jan 29 11:14:41.655908 containerd[1449]: time="2025-01-29T11:14:41.655862608Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:4,}" Jan 29 11:14:41.786521 systemd[1]: run-netns-cni\x2d8b1ce1a3\x2d8f09\x2d1dd9\x2d3ae2\x2d0a9a4d15b260.mount: Deactivated successfully. Jan 29 11:14:41.787291 systemd[1]: run-netns-cni\x2dc036b57a\x2dbbac\x2ddf1f\x2dd1a6\x2d7f43400b8b7a.mount: Deactivated successfully. Jan 29 11:14:41.787365 systemd[1]: run-netns-cni\x2d4af6aee6\x2db7f4\x2de543\x2db720\x2dd1847b48e5a2.mount: Deactivated successfully. Jan 29 11:14:41.787417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749-shm.mount: Deactivated successfully. Jan 29 11:14:41.787468 systemd[1]: run-netns-cni\x2d30e379dd\x2dc131\x2d5121\x2df391\x2d65217e5625ac.mount: Deactivated successfully. Jan 29 11:14:41.787515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c-shm.mount: Deactivated successfully. Jan 29 11:14:41.789045 systemd[1]: run-netns-cni\x2d83ec105c\x2d0f47\x2ddcb8\x2d540c\x2dbb24a5b41309.mount: Deactivated successfully. Jan 29 11:14:41.789107 systemd[1]: run-netns-cni\x2daebc3097\x2df9bd\x2d2e11\x2d4ac6\x2d950d600cc148.mount: Deactivated successfully. 
Jan 29 11:14:41.800207 containerd[1449]: time="2025-01-29T11:14:41.800158167Z" level=error msg="Failed to destroy network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.800575 containerd[1449]: time="2025-01-29T11:14:41.800530553Z" level=error msg="encountered an error cleaning up failed sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.800645 containerd[1449]: time="2025-01-29T11:14:41.800621480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.803950 kubelet[2617]: E0129 11:14:41.803912 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.804392 kubelet[2617]: E0129 11:14:41.804098 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:41.804392 kubelet[2617]: E0129 11:14:41.804124 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" Jan 29 11:14:41.804392 kubelet[2617]: E0129 11:14:41.804169 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-mfrdw_calico-apiserver(a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" podUID="a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7" Jan 29 11:14:41.804338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a-shm.mount: Deactivated successfully. 
Jan 29 11:14:41.804762 containerd[1449]: time="2025-01-29T11:14:41.804723091Z" level=error msg="Failed to destroy network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.806217 containerd[1449]: time="2025-01-29T11:14:41.806178834Z" level=error msg="encountered an error cleaning up failed sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.806461 containerd[1449]: time="2025-01-29T11:14:41.806409611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.806862 kubelet[2617]: E0129 11:14:41.806723 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.806862 kubelet[2617]: E0129 11:14:41.806769 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:41.807457 kubelet[2617]: E0129 11:14:41.806791 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlnjj" Jan 29 11:14:41.807457 kubelet[2617]: E0129 11:14:41.807117 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wlnjj_kube-system(06aa249c-a866-428e-8d59-48acbc7fcd5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wlnjj" podUID="06aa249c-a866-428e-8d59-48acbc7fcd5e" Jan 29 11:14:41.808495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9-shm.mount: Deactivated successfully. 
Jan 29 11:14:41.809212 containerd[1449]: time="2025-01-29T11:14:41.809176407Z" level=error msg="Failed to destroy network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.810798 containerd[1449]: time="2025-01-29T11:14:41.810750038Z" level=error msg="encountered an error cleaning up failed sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.810857 containerd[1449]: time="2025-01-29T11:14:41.810823644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.812009 kubelet[2617]: E0129 11:14:41.810989 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.812009 kubelet[2617]: E0129 11:14:41.811044 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:41.813220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847-shm.mount: Deactivated successfully. Jan 29 11:14:41.819296 kubelet[2617]: E0129 11:14:41.811060 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjm8h" Jan 29 11:14:41.819296 kubelet[2617]: E0129 11:14:41.819189 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjm8h_calico-system(51246db2-a0a0-40ce-bf4c-e10522a304db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjm8h" podUID="51246db2-a0a0-40ce-bf4c-e10522a304db" Jan 29 11:14:41.823696 containerd[1449]: time="2025-01-29T11:14:41.823651074Z" level=error msg="Failed to destroy network for sandbox 
\"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.825454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749-shm.mount: Deactivated successfully. Jan 29 11:14:41.826522 containerd[1449]: time="2025-01-29T11:14:41.826224297Z" level=error msg="encountered an error cleaning up failed sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.826522 containerd[1449]: time="2025-01-29T11:14:41.826287901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.826841 kubelet[2617]: E0129 11:14:41.826811 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.826992 kubelet[2617]: E0129 11:14:41.826936 2617 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:41.826992 kubelet[2617]: E0129 11:14:41.826964 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" Jan 29 11:14:41.827056 kubelet[2617]: E0129 11:14:41.827016 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc6dd774d-6k2vx_calico-apiserver(5db914b0-6a91-420a-9300-e102983010e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" podUID="5db914b0-6a91-420a-9300-e102983010e9" Jan 29 11:14:41.850751 containerd[1449]: time="2025-01-29T11:14:41.850694953Z" level=error msg="Failed to destroy network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.851091 containerd[1449]: time="2025-01-29T11:14:41.851043298Z" level=error msg="encountered an error cleaning up failed sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.851145 containerd[1449]: time="2025-01-29T11:14:41.851124983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.851367 kubelet[2617]: E0129 11:14:41.851333 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.851456 kubelet[2617]: E0129 11:14:41.851392 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:41.851656 kubelet[2617]: E0129 11:14:41.851632 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r4x6g" Jan 29 11:14:41.851709 kubelet[2617]: E0129 11:14:41.851686 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r4x6g_kube-system(64991b07-942c-4246-a46f-4589ee4a9827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r4x6g" podUID="64991b07-942c-4246-a46f-4589ee4a9827" Jan 29 11:14:41.857590 containerd[1449]: time="2025-01-29T11:14:41.857555320Z" level=error msg="Failed to destroy network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.857865 containerd[1449]: time="2025-01-29T11:14:41.857836780Z" level=error msg="encountered an error cleaning up failed sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.857922 containerd[1449]: time="2025-01-29T11:14:41.857894544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.858385 kubelet[2617]: E0129 11:14:41.858081 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:14:41.858385 kubelet[2617]: E0129 11:14:41.858134 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:41.858385 kubelet[2617]: E0129 11:14:41.858152 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" Jan 29 11:14:41.858527 kubelet[2617]: E0129 11:14:41.858194 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67779b498c-2wfqf_calico-system(3ebd6aa9-d128-4a03-9b92-9b846f7c50c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" podUID="3ebd6aa9-d128-4a03-9b92-9b846f7c50c7" Jan 29 11:14:42.014500 containerd[1449]: time="2025-01-29T11:14:42.014430182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:42.015270 containerd[1449]: time="2025-01-29T11:14:42.015232757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 11:14:42.016957 containerd[1449]: time="2025-01-29T11:14:42.016919713Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:42.019895 containerd[1449]: time="2025-01-29T11:14:42.019853275Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:42.020599 containerd[1449]: time="2025-01-29T11:14:42.020565604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.541089983s" Jan 29 11:14:42.020646 containerd[1449]: time="2025-01-29T11:14:42.020598486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 11:14:42.027629 containerd[1449]: time="2025-01-29T11:14:42.027452397Z" level=info msg="CreateContainer within sandbox \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:14:42.073399 containerd[1449]: time="2025-01-29T11:14:42.073214186Z" level=info msg="CreateContainer within sandbox \"d93525f00ea91d8a2234c005c8732b20326a49f704d5a46d105ac4af75e25558\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"613f8dc3358ea4b14f083e35ddba8f7f0f507eb338ab1af1e3cf2a10e110eb63\"" Jan 29 11:14:42.075217 containerd[1449]: time="2025-01-29T11:14:42.075187281Z" level=info msg="StartContainer for \"613f8dc3358ea4b14f083e35ddba8f7f0f507eb338ab1af1e3cf2a10e110eb63\"" Jan 29 11:14:42.127729 systemd[1]: Started cri-containerd-613f8dc3358ea4b14f083e35ddba8f7f0f507eb338ab1af1e3cf2a10e110eb63.scope - libcontainer container 613f8dc3358ea4b14f083e35ddba8f7f0f507eb338ab1af1e3cf2a10e110eb63. 
Jan 29 11:14:42.160768 containerd[1449]: time="2025-01-29T11:14:42.160651161Z" level=info msg="StartContainer for \"613f8dc3358ea4b14f083e35ddba8f7f0f507eb338ab1af1e3cf2a10e110eb63\" returns successfully" Jan 29 11:14:42.339071 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:14:42.339236 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:14:42.560177 kubelet[2617]: I0129 11:14:42.560129 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9" Jan 29 11:14:42.560930 containerd[1449]: time="2025-01-29T11:14:42.560867614Z" level=info msg="StopPodSandbox for \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\"" Jan 29 11:14:42.561134 containerd[1449]: time="2025-01-29T11:14:42.561043186Z" level=info msg="Ensure that sandbox a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9 in task-service has been cleanup successfully" Jan 29 11:14:42.561233 containerd[1449]: time="2025-01-29T11:14:42.561210558Z" level=info msg="TearDown network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\" successfully" Jan 29 11:14:42.561233 containerd[1449]: time="2025-01-29T11:14:42.561227439Z" level=info msg="StopPodSandbox for \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\" returns successfully" Jan 29 11:14:42.561635 containerd[1449]: time="2025-01-29T11:14:42.561601865Z" level=info msg="StopPodSandbox for \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\"" Jan 29 11:14:42.561691 containerd[1449]: time="2025-01-29T11:14:42.561677750Z" level=info msg="TearDown network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" successfully" Jan 29 11:14:42.561691 containerd[1449]: time="2025-01-29T11:14:42.561687631Z" level=info msg="StopPodSandbox for 
\"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" returns successfully" Jan 29 11:14:42.562094 containerd[1449]: time="2025-01-29T11:14:42.562067817Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" Jan 29 11:14:42.562169 containerd[1449]: time="2025-01-29T11:14:42.562146022Z" level=info msg="TearDown network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" successfully" Jan 29 11:14:42.562169 containerd[1449]: time="2025-01-29T11:14:42.562160583Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" returns successfully" Jan 29 11:14:42.562640 containerd[1449]: time="2025-01-29T11:14:42.562599933Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" Jan 29 11:14:42.562739 containerd[1449]: time="2025-01-29T11:14:42.562709701Z" level=info msg="TearDown network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" successfully" Jan 29 11:14:42.562739 containerd[1449]: time="2025-01-29T11:14:42.562722662Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" returns successfully" Jan 29 11:14:42.563329 containerd[1449]: time="2025-01-29T11:14:42.563294421Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:14:42.563414 containerd[1449]: time="2025-01-29T11:14:42.563389828Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully" Jan 29 11:14:42.563414 containerd[1449]: time="2025-01-29T11:14:42.563407349Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully" Jan 29 11:14:42.563460 kubelet[2617]: I0129 11:14:42.563416 2617 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847" Jan 29 11:14:42.563617 kubelet[2617]: E0129 11:14:42.563587 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:42.564306 containerd[1449]: time="2025-01-29T11:14:42.564284009Z" level=info msg="StopPodSandbox for \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\"" Jan 29 11:14:42.564461 containerd[1449]: time="2025-01-29T11:14:42.564441500Z" level=info msg="Ensure that sandbox 05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847 in task-service has been cleanup successfully" Jan 29 11:14:42.564693 containerd[1449]: time="2025-01-29T11:14:42.564673876Z" level=info msg="TearDown network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\" successfully" Jan 29 11:14:42.564729 containerd[1449]: time="2025-01-29T11:14:42.564692597Z" level=info msg="StopPodSandbox for \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\" returns successfully" Jan 29 11:14:42.565242 containerd[1449]: time="2025-01-29T11:14:42.565215313Z" level=info msg="StopPodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\"" Jan 29 11:14:42.565396 containerd[1449]: time="2025-01-29T11:14:42.565377124Z" level=info msg="TearDown network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" successfully" Jan 29 11:14:42.565396 containerd[1449]: time="2025-01-29T11:14:42.565393405Z" level=info msg="StopPodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" returns successfully" Jan 29 11:14:42.565890 containerd[1449]: time="2025-01-29T11:14:42.565699186Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" Jan 29 11:14:42.565890 containerd[1449]: 
time="2025-01-29T11:14:42.565776752Z" level=info msg="TearDown network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" successfully" Jan 29 11:14:42.565890 containerd[1449]: time="2025-01-29T11:14:42.565787313Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" returns successfully" Jan 29 11:14:42.566099 containerd[1449]: time="2025-01-29T11:14:42.566054851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:5,}" Jan 29 11:14:42.568873 containerd[1449]: time="2025-01-29T11:14:42.568816321Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" Jan 29 11:14:42.568966 containerd[1449]: time="2025-01-29T11:14:42.568947850Z" level=info msg="TearDown network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" successfully" Jan 29 11:14:42.568966 containerd[1449]: time="2025-01-29T11:14:42.568961331Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" returns successfully" Jan 29 11:14:42.569928 containerd[1449]: time="2025-01-29T11:14:42.569696661Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" Jan 29 11:14:42.570184 containerd[1449]: time="2025-01-29T11:14:42.570104570Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully" Jan 29 11:14:42.570184 containerd[1449]: time="2025-01-29T11:14:42.570127451Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully" Jan 29 11:14:42.572422 containerd[1449]: time="2025-01-29T11:14:42.571838729Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:5,}" Jan 29 11:14:42.578723 kubelet[2617]: E0129 11:14:42.578416 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:42.581272 kubelet[2617]: I0129 11:14:42.581024 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a" Jan 29 11:14:42.582633 containerd[1449]: time="2025-01-29T11:14:42.582598109Z" level=info msg="StopPodSandbox for \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\"" Jan 29 11:14:42.583456 containerd[1449]: time="2025-01-29T11:14:42.582754200Z" level=info msg="Ensure that sandbox b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a in task-service has been cleanup successfully" Jan 29 11:14:42.583643 containerd[1449]: time="2025-01-29T11:14:42.583502371Z" level=info msg="TearDown network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\" successfully" Jan 29 11:14:42.583832 containerd[1449]: time="2025-01-29T11:14:42.583528333Z" level=info msg="StopPodSandbox for \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\" returns successfully" Jan 29 11:14:42.584282 containerd[1449]: time="2025-01-29T11:14:42.584257143Z" level=info msg="StopPodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\"" Jan 29 11:14:42.585191 containerd[1449]: time="2025-01-29T11:14:42.585074039Z" level=info msg="TearDown network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" successfully" Jan 29 11:14:42.585191 containerd[1449]: time="2025-01-29T11:14:42.585097401Z" level=info msg="StopPodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" returns successfully" Jan 
29 11:14:42.585661 containerd[1449]: time="2025-01-29T11:14:42.585491708Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" Jan 29 11:14:42.585661 containerd[1449]: time="2025-01-29T11:14:42.585603636Z" level=info msg="TearDown network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" successfully" Jan 29 11:14:42.585661 containerd[1449]: time="2025-01-29T11:14:42.585616237Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" returns successfully" Jan 29 11:14:42.586267 containerd[1449]: time="2025-01-29T11:14:42.586146233Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" Jan 29 11:14:42.586580 containerd[1449]: time="2025-01-29T11:14:42.586441133Z" level=info msg="TearDown network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" successfully" Jan 29 11:14:42.586580 containerd[1449]: time="2025-01-29T11:14:42.586463135Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" returns successfully" Jan 29 11:14:42.587182 containerd[1449]: time="2025-01-29T11:14:42.587033454Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:14:42.587182 containerd[1449]: time="2025-01-29T11:14:42.587131901Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully" Jan 29 11:14:42.587182 containerd[1449]: time="2025-01-29T11:14:42.587141862Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully" Jan 29 11:14:42.588361 kubelet[2617]: I0129 11:14:42.587589 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749" 
Jan 29 11:14:42.588431 containerd[1449]: time="2025-01-29T11:14:42.587639496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:14:42.588431 containerd[1449]: time="2025-01-29T11:14:42.588065605Z" level=info msg="StopPodSandbox for \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\"" Jan 29 11:14:42.588431 containerd[1449]: time="2025-01-29T11:14:42.588196014Z" level=info msg="Ensure that sandbox 950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749 in task-service has been cleanup successfully" Jan 29 11:14:42.589083 containerd[1449]: time="2025-01-29T11:14:42.588965787Z" level=info msg="TearDown network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\" successfully" Jan 29 11:14:42.589083 containerd[1449]: time="2025-01-29T11:14:42.589079395Z" level=info msg="StopPodSandbox for \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\" returns successfully" Jan 29 11:14:42.589738 containerd[1449]: time="2025-01-29T11:14:42.589611912Z" level=info msg="StopPodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\"" Jan 29 11:14:42.589738 containerd[1449]: time="2025-01-29T11:14:42.589686717Z" level=info msg="TearDown network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" successfully" Jan 29 11:14:42.589738 containerd[1449]: time="2025-01-29T11:14:42.589696837Z" level=info msg="StopPodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" returns successfully" Jan 29 11:14:42.590467 containerd[1449]: time="2025-01-29T11:14:42.590364083Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" Jan 29 11:14:42.591331 containerd[1449]: time="2025-01-29T11:14:42.591058291Z" level=info msg="TearDown network for sandbox 
\"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" successfully" Jan 29 11:14:42.591331 containerd[1449]: time="2025-01-29T11:14:42.591078372Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" returns successfully" Jan 29 11:14:42.591462 kubelet[2617]: I0129 11:14:42.591098 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252" Jan 29 11:14:42.592164 containerd[1449]: time="2025-01-29T11:14:42.592055280Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" Jan 29 11:14:42.592409 containerd[1449]: time="2025-01-29T11:14:42.592336539Z" level=info msg="TearDown network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" successfully" Jan 29 11:14:42.593560 containerd[1449]: time="2025-01-29T11:14:42.592361861Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" returns successfully" Jan 29 11:14:42.593560 containerd[1449]: time="2025-01-29T11:14:42.592862975Z" level=info msg="StopPodSandbox for \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\"" Jan 29 11:14:42.593560 containerd[1449]: time="2025-01-29T11:14:42.593452256Z" level=info msg="Ensure that sandbox 7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252 in task-service has been cleanup successfully" Jan 29 11:14:42.593908 containerd[1449]: time="2025-01-29T11:14:42.593884566Z" level=info msg="TearDown network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\" successfully" Jan 29 11:14:42.593986 containerd[1449]: time="2025-01-29T11:14:42.593972452Z" level=info msg="StopPodSandbox for \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\" returns successfully" Jan 29 11:14:42.594416 containerd[1449]: 
time="2025-01-29T11:14:42.594398521Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:14:42.594642 containerd[1449]: time="2025-01-29T11:14:42.594481127Z" level=info msg="StopPodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\"" Jan 29 11:14:42.594642 containerd[1449]: time="2025-01-29T11:14:42.594564612Z" level=info msg="TearDown network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" successfully" Jan 29 11:14:42.594642 containerd[1449]: time="2025-01-29T11:14:42.594574133Z" level=info msg="TearDown network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully" Jan 29 11:14:42.594642 containerd[1449]: time="2025-01-29T11:14:42.594595174Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully" Jan 29 11:14:42.594642 containerd[1449]: time="2025-01-29T11:14:42.594576413Z" level=info msg="StopPodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" returns successfully" Jan 29 11:14:42.595044 containerd[1449]: time="2025-01-29T11:14:42.595023644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:14:42.595079 containerd[1449]: time="2025-01-29T11:14:42.595061446Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\"" Jan 29 11:14:42.595196 containerd[1449]: time="2025-01-29T11:14:42.595176934Z" level=info msg="TearDown network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" successfully" Jan 29 11:14:42.595221 containerd[1449]: time="2025-01-29T11:14:42.595196056Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" returns successfully" 
Jan 29 11:14:42.596029 containerd[1449]: time="2025-01-29T11:14:42.595980710Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\"" Jan 29 11:14:42.596103 containerd[1449]: time="2025-01-29T11:14:42.596068996Z" level=info msg="TearDown network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" successfully" Jan 29 11:14:42.596103 containerd[1449]: time="2025-01-29T11:14:42.596087637Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" returns successfully" Jan 29 11:14:42.596672 kubelet[2617]: I0129 11:14:42.596314 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd" Jan 29 11:14:42.596726 containerd[1449]: time="2025-01-29T11:14:42.596619834Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\"" Jan 29 11:14:42.596726 containerd[1449]: time="2025-01-29T11:14:42.596713160Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully" Jan 29 11:14:42.596768 containerd[1449]: time="2025-01-29T11:14:42.596730441Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully" Jan 29 11:14:42.597083 containerd[1449]: time="2025-01-29T11:14:42.596845169Z" level=info msg="StopPodSandbox for \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\"" Jan 29 11:14:42.597204 kubelet[2617]: E0129 11:14:42.597013 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:42.597475 containerd[1449]: time="2025-01-29T11:14:42.597099187Z" level=info msg="Ensure that sandbox 
322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd in task-service has been cleanup successfully" Jan 29 11:14:42.598395 containerd[1449]: time="2025-01-29T11:14:42.598239825Z" level=info msg="TearDown network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\" successfully" Jan 29 11:14:42.598395 containerd[1449]: time="2025-01-29T11:14:42.598272987Z" level=info msg="StopPodSandbox for \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\" returns successfully" Jan 29 11:14:42.598395 containerd[1449]: time="2025-01-29T11:14:42.598332552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:5,}" Jan 29 11:14:42.603242 kubelet[2617]: I0129 11:14:42.603059 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9dnh8" podStartSLOduration=1.750977067 podStartE2EDuration="13.603044916s" podCreationTimestamp="2025-01-29 11:14:29 +0000 UTC" firstStartedPulling="2025-01-29 11:14:30.169146839 +0000 UTC m=+22.848715627" lastFinishedPulling="2025-01-29 11:14:42.021214688 +0000 UTC m=+34.700783476" observedRunningTime="2025-01-29 11:14:42.598144019 +0000 UTC m=+35.277712807" watchObservedRunningTime="2025-01-29 11:14:42.603044916 +0000 UTC m=+35.282613704" Jan 29 11:14:42.604019 containerd[1449]: time="2025-01-29T11:14:42.603985820Z" level=info msg="StopPodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\"" Jan 29 11:14:42.604095 containerd[1449]: time="2025-01-29T11:14:42.604079187Z" level=info msg="TearDown network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" successfully" Jan 29 11:14:42.604095 containerd[1449]: time="2025-01-29T11:14:42.604093188Z" level=info msg="StopPodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" returns successfully" Jan 29 11:14:42.604400 
containerd[1449]: time="2025-01-29T11:14:42.604374887Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\"" Jan 29 11:14:42.604741 containerd[1449]: time="2025-01-29T11:14:42.604525338Z" level=info msg="TearDown network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" successfully" Jan 29 11:14:42.604741 containerd[1449]: time="2025-01-29T11:14:42.604556340Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" returns successfully" Jan 29 11:14:42.604969 containerd[1449]: time="2025-01-29T11:14:42.604944286Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\"" Jan 29 11:14:42.605156 containerd[1449]: time="2025-01-29T11:14:42.605125219Z" level=info msg="TearDown network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" successfully" Jan 29 11:14:42.605375 containerd[1449]: time="2025-01-29T11:14:42.605314472Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" returns successfully" Jan 29 11:14:42.605959 containerd[1449]: time="2025-01-29T11:14:42.605894432Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\"" Jan 29 11:14:42.606165 containerd[1449]: time="2025-01-29T11:14:42.606016880Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully" Jan 29 11:14:42.606165 containerd[1449]: time="2025-01-29T11:14:42.606028001Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully" Jan 29 11:14:42.606702 containerd[1449]: time="2025-01-29T11:14:42.606641763Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:5,}" Jan 29 11:14:42.787476 systemd[1]: run-netns-cni\x2d03a8d15e\x2dc8aa\x2dd2d9\x2db21f\x2d189bd2448064.mount: Deactivated successfully. Jan 29 11:14:42.787583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd-shm.mount: Deactivated successfully. Jan 29 11:14:42.787638 systemd[1]: run-netns-cni\x2d07eccb42\x2d9a8b\x2d5623\x2d6398\x2de9672ccae750.mount: Deactivated successfully. Jan 29 11:14:42.787689 systemd[1]: run-netns-cni\x2d26ab779a\x2d31e0\x2d4f3d\x2d1612\x2ddd9e663b60d3.mount: Deactivated successfully. Jan 29 11:14:42.787731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252-shm.mount: Deactivated successfully. Jan 29 11:14:42.787777 systemd[1]: run-netns-cni\x2ddbd2f8e1\x2d3ffe\x2db5b4\x2d40a7\x2d97bdc17d949e.mount: Deactivated successfully. Jan 29 11:14:42.787819 systemd[1]: run-netns-cni\x2d4ab3bbf7\x2de57f\x2dd228\x2d9d1d\x2d19a5994991c4.mount: Deactivated successfully. Jan 29 11:14:42.787859 systemd[1]: run-netns-cni\x2df957835f\x2da959\x2d99ff\x2deed5\x2da26d80fb8caf.mount: Deactivated successfully. Jan 29 11:14:42.787901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561763482.mount: Deactivated successfully. 
Jan 29 11:14:43.083612 systemd-networkd[1385]: cali02225741ce9: Link UP Jan 29 11:14:43.084161 systemd-networkd[1385]: cali02225741ce9: Gained carrier Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:42.678 [INFO][4625] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:42.745 [INFO][4625] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0 coredns-7db6d8ff4d- kube-system 64991b07-942c-4246-a46f-4589ee4a9827 767 0 2025-01-29 11:14:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-r4x6g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02225741ce9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:42.746 [INFO][4625] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:42.981 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" HandleID="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Workload="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.002 [INFO][4724] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" HandleID="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Workload="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000387430), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-r4x6g", "timestamp":"2025-01-29 11:14:42.981092924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.002 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.002 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.002 [INFO][4724] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.008 [INFO][4724] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.020 [INFO][4724] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.030 [INFO][4724] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.031 [INFO][4724] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.033 [INFO][4724] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.034 [INFO][4724] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.035 [INFO][4724] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13 Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.047 [INFO][4724] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.064 [INFO][4724] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.064 [INFO][4724] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" host="localhost" Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.064 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:14:43.101435 containerd[1449]: 2025-01-29 11:14:43.064 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" HandleID="k8s-pod-network.329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Workload="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.102173 containerd[1449]: 2025-01-29 11:14:43.067 [INFO][4625] cni-plugin/k8s.go 386: Populated endpoint ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"64991b07-942c-4246-a46f-4589ee4a9827", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-r4x6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02225741ce9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.102173 containerd[1449]: 2025-01-29 11:14:43.067 [INFO][4625] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.102173 containerd[1449]: 2025-01-29 11:14:43.067 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02225741ce9 ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.102173 containerd[1449]: 2025-01-29 11:14:43.084 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.102173 containerd[1449]: 2025-01-29 11:14:43.084 [INFO][4625] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"64991b07-942c-4246-a46f-4589ee4a9827", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13", Pod:"coredns-7db6d8ff4d-r4x6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02225741ce9", MAC:"1e:7c:98:2e:80:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.102173 containerd[1449]: 2025-01-29 11:14:43.096 [INFO][4625] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-r4x6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r4x6g-eth0" Jan 29 11:14:43.108701 systemd-networkd[1385]: cali6d3a30049a8: Link UP Jan 29 11:14:43.108885 systemd-networkd[1385]: cali6d3a30049a8: Gained carrier Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:42.683 [INFO][4639] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:42.747 [INFO][4639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0 calico-apiserver-5fc6dd774d- calico-apiserver 5db914b0-6a91-420a-9300-e102983010e9 772 0 2025-01-29 11:14:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fc6dd774d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fc6dd774d-6k2vx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d3a30049a8 [] []}} ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:42.748 [INFO][4639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:42.989 [INFO][4729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" 
HandleID="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Workload="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.010 [INFO][4729] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" HandleID="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Workload="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400023e740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fc6dd774d-6k2vx", "timestamp":"2025-01-29 11:14:42.989113076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.010 [INFO][4729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.064 [INFO][4729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.064 [INFO][4729] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.066 [INFO][4729] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.074 [INFO][4729] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.081 [INFO][4729] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.083 [INFO][4729] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.086 [INFO][4729] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.086 [INFO][4729] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.090 [INFO][4729] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7 Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.096 [INFO][4729] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.102 [INFO][4729] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.103 [INFO][4729] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" host="localhost" Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.103 [INFO][4729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:14:43.124079 containerd[1449]: 2025-01-29 11:14:43.103 [INFO][4729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" HandleID="k8s-pod-network.19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Workload="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.124665 containerd[1449]: 2025-01-29 11:14:43.105 [INFO][4639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0", GenerateName:"calico-apiserver-5fc6dd774d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5db914b0-6a91-420a-9300-e102983010e9", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc6dd774d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fc6dd774d-6k2vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d3a30049a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.124665 containerd[1449]: 2025-01-29 11:14:43.106 [INFO][4639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.124665 containerd[1449]: 2025-01-29 11:14:43.106 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d3a30049a8 ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.124665 containerd[1449]: 2025-01-29 11:14:43.107 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.124665 containerd[1449]: 2025-01-29 11:14:43.108 [INFO][4639] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0", GenerateName:"calico-apiserver-5fc6dd774d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5db914b0-6a91-420a-9300-e102983010e9", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc6dd774d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7", Pod:"calico-apiserver-5fc6dd774d-6k2vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d3a30049a8", MAC:"1e:35:96:ae:bc:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.124665 containerd[1449]: 2025-01-29 11:14:43.117 [INFO][4639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-6k2vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--6k2vx-eth0" Jan 29 11:14:43.129560 containerd[1449]: time="2025-01-29T11:14:43.129298341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:43.129560 containerd[1449]: time="2025-01-29T11:14:43.129369585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:43.129560 containerd[1449]: time="2025-01-29T11:14:43.129417069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.130154 containerd[1449]: time="2025-01-29T11:14:43.130013668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.150347 systemd-networkd[1385]: calied411289000: Link UP Jan 29 11:14:43.151451 systemd-networkd[1385]: calied411289000: Gained carrier Jan 29 11:14:43.158887 containerd[1449]: time="2025-01-29T11:14:43.158779469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:43.158887 containerd[1449]: time="2025-01-29T11:14:43.158853634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:43.158887 containerd[1449]: time="2025-01-29T11:14:43.158868475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.159156 containerd[1449]: time="2025-01-29T11:14:43.159009325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.160828 systemd[1]: Started cri-containerd-329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13.scope - libcontainer container 329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13. Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:42.639 [INFO][4605] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:42.743 [INFO][4605] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qjm8h-eth0 csi-node-driver- calico-system 51246db2-a0a0-40ce-bf4c-e10522a304db 623 0 2025-01-29 11:14:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qjm8h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calied411289000 [] []}} ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:42.744 [INFO][4605] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:42.981 [INFO][4708] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" 
HandleID="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Workload="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.011 [INFO][4708] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" HandleID="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Workload="localhost-k8s-csi--node--driver--qjm8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003735b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qjm8h", "timestamp":"2025-01-29 11:14:42.98147179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.011 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.103 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.103 [INFO][4708] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.105 [INFO][4708] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.111 [INFO][4708] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.118 [INFO][4708] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.120 [INFO][4708] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.124 [INFO][4708] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.124 [INFO][4708] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.126 [INFO][4708] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.131 [INFO][4708] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.141 [INFO][4708] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.141 [INFO][4708] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" host="localhost" Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.141 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:14:43.181220 containerd[1449]: 2025-01-29 11:14:43.141 [INFO][4708] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" HandleID="k8s-pod-network.8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Workload="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.181970 containerd[1449]: 2025-01-29 11:14:43.146 [INFO][4605] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qjm8h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51246db2-a0a0-40ce-bf4c-e10522a304db", ResourceVersion:"623", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qjm8h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied411289000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.181970 containerd[1449]: 2025-01-29 11:14:43.146 [INFO][4605] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.181970 containerd[1449]: 2025-01-29 11:14:43.146 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied411289000 ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.181970 containerd[1449]: 2025-01-29 11:14:43.152 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.181970 containerd[1449]: 2025-01-29 11:14:43.153 [INFO][4605] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" 
Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qjm8h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51246db2-a0a0-40ce-bf4c-e10522a304db", ResourceVersion:"623", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe", Pod:"csi-node-driver-qjm8h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied411289000", MAC:"ca:f1:d8:a6:12:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.181970 containerd[1449]: 2025-01-29 11:14:43.176 [INFO][4605] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe" Namespace="calico-system" Pod="csi-node-driver-qjm8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjm8h-eth0" Jan 29 11:14:43.182881 
systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:43.186693 systemd[1]: Started cri-containerd-19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7.scope - libcontainer container 19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7. Jan 29 11:14:43.201690 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:43.207855 containerd[1449]: time="2025-01-29T11:14:43.207718417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r4x6g,Uid:64991b07-942c-4246-a46f-4589ee4a9827,Namespace:kube-system,Attempt:5,} returns sandbox id \"329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13\"" Jan 29 11:14:43.208602 kubelet[2617]: E0129 11:14:43.208512 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:43.212199 containerd[1449]: time="2025-01-29T11:14:43.211929738Z" level=info msg="CreateContainer within sandbox \"329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:14:43.215341 systemd-networkd[1385]: cali82b1f2e9f53: Link UP Jan 29 11:14:43.216486 containerd[1449]: time="2025-01-29T11:14:43.215239039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:43.216486 containerd[1449]: time="2025-01-29T11:14:43.215391249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:43.216486 containerd[1449]: time="2025-01-29T11:14:43.215415011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.216282 systemd-networkd[1385]: cali82b1f2e9f53: Gained carrier Jan 29 11:14:43.217137 containerd[1449]: time="2025-01-29T11:14:43.215509217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:42.632 [INFO][4600] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:42.745 [INFO][4600] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0 coredns-7db6d8ff4d- kube-system 06aa249c-a866-428e-8d59-48acbc7fcd5e 776 0 2025-01-29 11:14:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-wlnjj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali82b1f2e9f53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:42.745 [INFO][4600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:42.991 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" 
HandleID="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Workload="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.016 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" HandleID="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Workload="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ee3c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-wlnjj", "timestamp":"2025-01-29 11:14:42.9917941 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.016 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.142 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.142 [INFO][4710] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.145 [INFO][4710] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.153 [INFO][4710] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.176 [INFO][4710] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.179 [INFO][4710] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.181 [INFO][4710] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.182 [INFO][4710] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.183 [INFO][4710] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2 Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.195 [INFO][4710] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.204 [INFO][4710] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.204 [INFO][4710] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" host="localhost" Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.204 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:14:43.233691 containerd[1449]: 2025-01-29 11:14:43.204 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" HandleID="k8s-pod-network.37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Workload="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 11:14:43.234197 containerd[1449]: 2025-01-29 11:14:43.211 [INFO][4600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"06aa249c-a866-428e-8d59-48acbc7fcd5e", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-wlnjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82b1f2e9f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.234197 containerd[1449]: 2025-01-29 11:14:43.212 [INFO][4600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 11:14:43.234197 containerd[1449]: 2025-01-29 11:14:43.212 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82b1f2e9f53 ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 11:14:43.234197 containerd[1449]: 2025-01-29 11:14:43.216 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 
11:14:43.234197 containerd[1449]: 2025-01-29 11:14:43.220 [INFO][4600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"06aa249c-a866-428e-8d59-48acbc7fcd5e", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2", Pod:"coredns-7db6d8ff4d-wlnjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82b1f2e9f53", MAC:"ee:70:8d:a9:64:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.234197 containerd[1449]: 2025-01-29 11:14:43.231 [INFO][4600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlnjj" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wlnjj-eth0" Jan 29 11:14:43.236901 systemd[1]: Started cri-containerd-8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe.scope - libcontainer container 8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe. Jan 29 11:14:43.251805 containerd[1449]: time="2025-01-29T11:14:43.251703554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-6k2vx,Uid:5db914b0-6a91-420a-9300-e102983010e9,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7\"" Jan 29 11:14:43.254753 containerd[1449]: time="2025-01-29T11:14:43.254716435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:14:43.257322 systemd-networkd[1385]: cali412571b8415: Link UP Jan 29 11:14:43.257514 systemd-networkd[1385]: cali412571b8415: Gained carrier Jan 29 11:14:43.260029 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:43.272209 containerd[1449]: time="2025-01-29T11:14:43.272162080Z" level=info msg="CreateContainer within sandbox \"329efd73799d59051ba9832fbabb94a0a3a14c00d59b5ec998731e1f3dc78f13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"572e47abf7693f9352147a70eed946d7a617d55e6da603ac0291b8dd626f75b2\"" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:42.708 [INFO][4653] cni-plugin/utils.go 100: File /var/lib/calico/mtu does 
not exist Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:42.744 [INFO][4653] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0 calico-apiserver-5fc6dd774d- calico-apiserver a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7 775 0 2025-01-29 11:14:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fc6dd774d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fc6dd774d-mfrdw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali412571b8415 [] []}} ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:42.744 [INFO][4653] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:42.990 [INFO][4709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" HandleID="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Workload="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.018 [INFO][4709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" 
HandleID="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Workload="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059c410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fc6dd774d-mfrdw", "timestamp":"2025-01-29 11:14:42.990301797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.019 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.204 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.204 [INFO][4709] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.208 [INFO][4709] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.220 [INFO][4709] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.228 [INFO][4709] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.232 [INFO][4709] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.235 [INFO][4709] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 
2025-01-29 11:14:43.235 [INFO][4709] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.238 [INFO][4709] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8 Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.242 [INFO][4709] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.251 [INFO][4709] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.251 [INFO][4709] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" host="localhost" Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.251 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:14:43.274104 containerd[1449]: 2025-01-29 11:14:43.251 [INFO][4709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" HandleID="k8s-pod-network.181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Workload="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.274594 containerd[1449]: 2025-01-29 11:14:43.254 [INFO][4653] cni-plugin/k8s.go 386: Populated endpoint ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0", GenerateName:"calico-apiserver-5fc6dd774d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc6dd774d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fc6dd774d-mfrdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali412571b8415", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.274594 containerd[1449]: 2025-01-29 11:14:43.255 [INFO][4653] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.274594 containerd[1449]: 2025-01-29 11:14:43.255 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali412571b8415 ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.274594 containerd[1449]: 2025-01-29 11:14:43.257 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.274594 containerd[1449]: 2025-01-29 11:14:43.257 [INFO][4653] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0", GenerateName:"calico-apiserver-5fc6dd774d-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc6dd774d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8", Pod:"calico-apiserver-5fc6dd774d-mfrdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali412571b8415", MAC:"a2:d8:e8:ad:d1:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.274594 containerd[1449]: 2025-01-29 11:14:43.269 [INFO][4653] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8" Namespace="calico-apiserver" Pod="calico-apiserver-5fc6dd774d-mfrdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc6dd774d--mfrdw-eth0" Jan 29 11:14:43.275561 containerd[1449]: time="2025-01-29T11:14:43.275196962Z" level=info msg="StartContainer for \"572e47abf7693f9352147a70eed946d7a617d55e6da603ac0291b8dd626f75b2\"" Jan 29 11:14:43.275780 containerd[1449]: time="2025-01-29T11:14:43.275365574Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qjm8h,Uid:51246db2-a0a0-40ce-bf4c-e10522a304db,Namespace:calico-system,Attempt:5,} returns sandbox id \"8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe\"" Jan 29 11:14:43.304108 containerd[1449]: time="2025-01-29T11:14:43.300500212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:43.304108 containerd[1449]: time="2025-01-29T11:14:43.300609219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:43.304108 containerd[1449]: time="2025-01-29T11:14:43.300624980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.304108 containerd[1449]: time="2025-01-29T11:14:43.302661756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.309870 systemd-networkd[1385]: calie8f169d0d63: Link UP Jan 29 11:14:43.310648 systemd-networkd[1385]: calie8f169d0d63: Gained carrier Jan 29 11:14:43.320401 containerd[1449]: time="2025-01-29T11:14:43.294560055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:43.320401 containerd[1449]: time="2025-01-29T11:14:43.294609338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:43.320401 containerd[1449]: time="2025-01-29T11:14:43.294650381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.320401 containerd[1449]: time="2025-01-29T11:14:43.294729906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:42.761 [INFO][4663] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:42.794 [INFO][4663] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0 calico-kube-controllers-67779b498c- calico-system 3ebd6aa9-d128-4a03-9b92-9b846f7c50c7 774 0 2025-01-29 11:14:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67779b498c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67779b498c-2wfqf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8f169d0d63 [] []}} ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:42.794 [INFO][4663] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:42.993 [INFO][4734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" HandleID="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" 
Workload="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.029 [INFO][4734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" HandleID="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Workload="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67779b498c-2wfqf", "timestamp":"2025-01-29 11:14:42.993724793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.030 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.251 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.252 [INFO][4734] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.254 [INFO][4734] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.269 [INFO][4734] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.279 [INFO][4734] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.281 [INFO][4734] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.283 [INFO][4734] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.283 [INFO][4734] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.287 [INFO][4734] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.294 [INFO][4734] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.303 [INFO][4734] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.303 [INFO][4734] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" host="localhost" Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.303 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:14:43.325045 containerd[1449]: 2025-01-29 11:14:43.303 [INFO][4734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" HandleID="k8s-pod-network.a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Workload="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.325554 containerd[1449]: 2025-01-29 11:14:43.306 [INFO][4663] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0", GenerateName:"calico-kube-controllers-67779b498c-", Namespace:"calico-system", SelfLink:"", UID:"3ebd6aa9-d128-4a03-9b92-9b846f7c50c7", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67779b498c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67779b498c-2wfqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8f169d0d63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.325554 containerd[1449]: 2025-01-29 11:14:43.306 [INFO][4663] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.325554 containerd[1449]: 2025-01-29 11:14:43.306 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8f169d0d63 ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.325554 containerd[1449]: 2025-01-29 11:14:43.310 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.325554 containerd[1449]: 2025-01-29 11:14:43.311 [INFO][4663] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0", GenerateName:"calico-kube-controllers-67779b498c-", Namespace:"calico-system", SelfLink:"", UID:"3ebd6aa9-d128-4a03-9b92-9b846f7c50c7", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67779b498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee", Pod:"calico-kube-controllers-67779b498c-2wfqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8f169d0d63", MAC:"1a:19:2a:5f:86:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:14:43.325554 containerd[1449]: 2025-01-29 11:14:43.319 [INFO][4663] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee" Namespace="calico-system" Pod="calico-kube-controllers-67779b498c-2wfqf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67779b498c--2wfqf-eth0" Jan 29 11:14:43.337731 systemd[1]: Started cri-containerd-181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8.scope - libcontainer container 181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8. Jan 29 11:14:43.345670 systemd[1]: Started cri-containerd-37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2.scope - libcontainer container 37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2. Jan 29 11:14:43.347270 systemd[1]: Started cri-containerd-572e47abf7693f9352147a70eed946d7a617d55e6da603ac0291b8dd626f75b2.scope - libcontainer container 572e47abf7693f9352147a70eed946d7a617d55e6da603ac0291b8dd626f75b2. Jan 29 11:14:43.357005 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:43.362221 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:43.376101 containerd[1449]: time="2025-01-29T11:14:43.376065337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc6dd774d-mfrdw,Uid:a9b2c67e-8a72-413f-8ca7-b4e54dd85bb7,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8\"" Jan 29 11:14:43.377722 containerd[1449]: time="2025-01-29T11:14:43.377288739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:43.377722 containerd[1449]: time="2025-01-29T11:14:43.377343423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:43.377722 containerd[1449]: time="2025-01-29T11:14:43.377354103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.377722 containerd[1449]: time="2025-01-29T11:14:43.377424828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:43.387379 containerd[1449]: time="2025-01-29T11:14:43.387343490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlnjj,Uid:06aa249c-a866-428e-8d59-48acbc7fcd5e,Namespace:kube-system,Attempt:5,} returns sandbox id \"37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2\"" Jan 29 11:14:43.388741 kubelet[2617]: E0129 11:14:43.388718 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:43.394155 containerd[1449]: time="2025-01-29T11:14:43.394040337Z" level=info msg="CreateContainer within sandbox \"37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:14:43.395507 containerd[1449]: time="2025-01-29T11:14:43.395339584Z" level=info msg="StartContainer for \"572e47abf7693f9352147a70eed946d7a617d55e6da603ac0291b8dd626f75b2\" returns successfully" Jan 29 11:14:43.402730 systemd[1]: Started cri-containerd-a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee.scope - libcontainer container a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee. 
Jan 29 11:14:43.412993 containerd[1449]: time="2025-01-29T11:14:43.412949400Z" level=info msg="CreateContainer within sandbox \"37cecdfe718fb581b6373bf92ece71ab5579cff9467e33d51c0a702eb89059f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c20d48386dfd8469fb63b3f7c3eb787b4ad3fbea07621d37517a06414b78312\"" Jan 29 11:14:43.414019 containerd[1449]: time="2025-01-29T11:14:43.413631085Z" level=info msg="StartContainer for \"7c20d48386dfd8469fb63b3f7c3eb787b4ad3fbea07621d37517a06414b78312\"" Jan 29 11:14:43.428854 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:43.450436 systemd[1]: Started cri-containerd-7c20d48386dfd8469fb63b3f7c3eb787b4ad3fbea07621d37517a06414b78312.scope - libcontainer container 7c20d48386dfd8469fb63b3f7c3eb787b4ad3fbea07621d37517a06414b78312. Jan 29 11:14:43.461169 containerd[1449]: time="2025-01-29T11:14:43.461128457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67779b498c-2wfqf,Uid:3ebd6aa9-d128-4a03-9b92-9b846f7c50c7,Namespace:calico-system,Attempt:5,} returns sandbox id \"a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee\"" Jan 29 11:14:43.488260 containerd[1449]: time="2025-01-29T11:14:43.488212345Z" level=info msg="StartContainer for \"7c20d48386dfd8469fb63b3f7c3eb787b4ad3fbea07621d37517a06414b78312\" returns successfully" Jan 29 11:14:43.606737 kubelet[2617]: E0129 11:14:43.606629 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:43.628619 kubelet[2617]: I0129 11:14:43.626632 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wlnjj" podStartSLOduration=20.626618306 podStartE2EDuration="20.626618306s" podCreationTimestamp="2025-01-29 11:14:23 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:43.626173477 +0000 UTC m=+36.305742265" watchObservedRunningTime="2025-01-29 11:14:43.626618306 +0000 UTC m=+36.306187094" Jan 29 11:14:43.651377 kubelet[2617]: E0129 11:14:43.651338 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:43.664580 kubelet[2617]: E0129 11:14:43.664384 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:43.779798 systemd[1]: run-containerd-runc-k8s.io-19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7-runc.loM318.mount: Deactivated successfully. Jan 29 11:14:43.836771 kernel: bpftool[5303]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:14:43.988568 systemd-networkd[1385]: vxlan.calico: Link UP Jan 29 11:14:43.988575 systemd-networkd[1385]: vxlan.calico: Gained carrier Jan 29 11:14:44.167784 systemd-networkd[1385]: cali6d3a30049a8: Gained IPv6LL Jan 29 11:14:44.294894 systemd-networkd[1385]: cali412571b8415: Gained IPv6LL Jan 29 11:14:44.423145 systemd-networkd[1385]: calied411289000: Gained IPv6LL Jan 29 11:14:44.604456 containerd[1449]: time="2025-01-29T11:14:44.604323042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 11:14:44.607089 containerd[1449]: time="2025-01-29T11:14:44.607054819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size 
\"40668079\" in 1.352303102s" Jan 29 11:14:44.607089 containerd[1449]: time="2025-01-29T11:14:44.607092501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:14:44.607744 containerd[1449]: time="2025-01-29T11:14:44.607708981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:44.608606 containerd[1449]: time="2025-01-29T11:14:44.608581558Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:44.609062 containerd[1449]: time="2025-01-29T11:14:44.609040228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:14:44.609338 containerd[1449]: time="2025-01-29T11:14:44.609161876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:44.611813 containerd[1449]: time="2025-01-29T11:14:44.611785526Z" level=info msg="CreateContainer within sandbox \"19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:14:44.624583 containerd[1449]: time="2025-01-29T11:14:44.624528113Z" level=info msg="CreateContainer within sandbox \"19e183af97ef7d41ee00a08f9710c4f00fdcb5b40967316389b48f86d561add7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"36b25f13c58b526c6e04e336a97150e451909c13c5ecf49112d908d460443c85\"" Jan 29 11:14:44.625084 containerd[1449]: time="2025-01-29T11:14:44.625061427Z" level=info msg="StartContainer for \"36b25f13c58b526c6e04e336a97150e451909c13c5ecf49112d908d460443c85\"" Jan 29 
11:14:44.660529 kubelet[2617]: E0129 11:14:44.660491 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:44.661798 kubelet[2617]: E0129 11:14:44.661060 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:44.673140 kubelet[2617]: I0129 11:14:44.672644 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r4x6g" podStartSLOduration=21.672627713 podStartE2EDuration="21.672627713s" podCreationTimestamp="2025-01-29 11:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:43.682137973 +0000 UTC m=+36.361706761" watchObservedRunningTime="2025-01-29 11:14:44.672627713 +0000 UTC m=+37.352196501" Jan 29 11:14:44.675833 systemd[1]: Started cri-containerd-36b25f13c58b526c6e04e336a97150e451909c13c5ecf49112d908d460443c85.scope - libcontainer container 36b25f13c58b526c6e04e336a97150e451909c13c5ecf49112d908d460443c85. 
Jan 29 11:14:44.715052 containerd[1449]: time="2025-01-29T11:14:44.714859892Z" level=info msg="StartContainer for \"36b25f13c58b526c6e04e336a97150e451909c13c5ecf49112d908d460443c85\" returns successfully" Jan 29 11:14:44.744753 systemd-networkd[1385]: cali02225741ce9: Gained IPv6LL Jan 29 11:14:44.807085 systemd-networkd[1385]: calie8f169d0d63: Gained IPv6LL Jan 29 11:14:44.998697 systemd-networkd[1385]: cali82b1f2e9f53: Gained IPv6LL Jan 29 11:14:45.511781 containerd[1449]: time="2025-01-29T11:14:45.511735236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:45.512847 containerd[1449]: time="2025-01-29T11:14:45.512801944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 11:14:45.513873 containerd[1449]: time="2025-01-29T11:14:45.513840529Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:45.516103 containerd[1449]: time="2025-01-29T11:14:45.516035028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:45.516828 containerd[1449]: time="2025-01-29T11:14:45.516710230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 907.637601ms" Jan 29 11:14:45.516828 containerd[1449]: time="2025-01-29T11:14:45.516743312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 11:14:45.520109 containerd[1449]: time="2025-01-29T11:14:45.519189667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:14:45.535483 containerd[1449]: time="2025-01-29T11:14:45.535439132Z" level=info msg="CreateContainer within sandbox \"8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:14:45.550875 containerd[1449]: time="2025-01-29T11:14:45.550826823Z" level=info msg="CreateContainer within sandbox \"8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"99989cc9b625a253838a57efd76b53075ebd64d7093a47af07cd608efe71c503\"" Jan 29 11:14:45.553450 containerd[1449]: time="2025-01-29T11:14:45.551340655Z" level=info msg="StartContainer for \"99989cc9b625a253838a57efd76b53075ebd64d7093a47af07cd608efe71c503\"" Jan 29 11:14:45.574944 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Jan 29 11:14:45.595802 systemd[1]: Started cri-containerd-99989cc9b625a253838a57efd76b53075ebd64d7093a47af07cd608efe71c503.scope - libcontainer container 99989cc9b625a253838a57efd76b53075ebd64d7093a47af07cd608efe71c503. 
Jan 29 11:14:45.646768 containerd[1449]: time="2025-01-29T11:14:45.646724153Z" level=info msg="StartContainer for \"99989cc9b625a253838a57efd76b53075ebd64d7093a47af07cd608efe71c503\" returns successfully" Jan 29 11:14:45.679768 kubelet[2617]: E0129 11:14:45.678696 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:45.679768 kubelet[2617]: E0129 11:14:45.678814 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:45.691294 kubelet[2617]: I0129 11:14:45.691230 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fc6dd774d-6k2vx" podStartSLOduration=14.336621188 podStartE2EDuration="15.691212439s" podCreationTimestamp="2025-01-29 11:14:30 +0000 UTC" firstStartedPulling="2025-01-29 11:14:43.254266365 +0000 UTC m=+35.933835153" lastFinishedPulling="2025-01-29 11:14:44.608857616 +0000 UTC m=+37.288426404" observedRunningTime="2025-01-29 11:14:45.690053446 +0000 UTC m=+38.369622234" watchObservedRunningTime="2025-01-29 11:14:45.691212439 +0000 UTC m=+38.370781227" Jan 29 11:14:45.798607 containerd[1449]: time="2025-01-29T11:14:45.798452085Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:45.803302 containerd[1449]: time="2025-01-29T11:14:45.803240987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:14:45.805234 containerd[1449]: time="2025-01-29T11:14:45.805192150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 285.958081ms" Jan 29 11:14:45.805300 containerd[1449]: time="2025-01-29T11:14:45.805252194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:14:45.806630 containerd[1449]: time="2025-01-29T11:14:45.806593839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:14:45.809465 containerd[1449]: time="2025-01-29T11:14:45.809110437Z" level=info msg="CreateContainer within sandbox \"181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:14:45.825163 containerd[1449]: time="2025-01-29T11:14:45.825118767Z" level=info msg="CreateContainer within sandbox \"181bb70759b95580e4d41d488da7e04b1b4498180c95d32f88051090256b3ef8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba116f6140673f8958add716739c43bad42559d55dafb18f36e380447d925707\"" Jan 29 11:14:45.825957 containerd[1449]: time="2025-01-29T11:14:45.825843693Z" level=info msg="StartContainer for \"ba116f6140673f8958add716739c43bad42559d55dafb18f36e380447d925707\"" Jan 29 11:14:45.856791 systemd[1]: Started cri-containerd-ba116f6140673f8958add716739c43bad42559d55dafb18f36e380447d925707.scope - libcontainer container ba116f6140673f8958add716739c43bad42559d55dafb18f36e380447d925707. Jan 29 11:14:45.892933 containerd[1449]: time="2025-01-29T11:14:45.892883803Z" level=info msg="StartContainer for \"ba116f6140673f8958add716739c43bad42559d55dafb18f36e380447d925707\" returns successfully" Jan 29 11:14:46.133579 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:45394.service - OpenSSH per-connection server daemon (10.0.0.1:45394). 
Jan 29 11:14:46.202834 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 45394 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:46.204491 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:46.208923 systemd-logind[1426]: New session 10 of user core. Jan 29 11:14:46.216771 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:14:46.425341 sshd[5515]: Connection closed by 10.0.0.1 port 45394 Jan 29 11:14:46.426134 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:46.435267 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:45394.service: Deactivated successfully. Jan 29 11:14:46.437056 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:14:46.440217 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:14:46.448421 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:45398.service - OpenSSH per-connection server daemon (10.0.0.1:45398). Jan 29 11:14:46.450588 systemd-logind[1426]: Removed session 10. Jan 29 11:14:46.489034 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 45398 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:46.491444 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:46.501231 systemd-logind[1426]: New session 11 of user core. Jan 29 11:14:46.505742 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 11:14:46.686179 kubelet[2617]: E0129 11:14:46.686064 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:46.688043 kubelet[2617]: I0129 11:14:46.688018 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:14:46.933944 sshd[5534]: Connection closed by 10.0.0.1 port 45398 Jan 29 11:14:46.934411 sshd-session[5532]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:46.946734 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:45398.service: Deactivated successfully. Jan 29 11:14:46.952400 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:14:46.959035 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:14:46.976412 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:45402.service - OpenSSH per-connection server daemon (10.0.0.1:45402). Jan 29 11:14:46.978040 systemd-logind[1426]: Removed session 11. Jan 29 11:14:47.053322 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 45402 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:47.054863 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:47.064823 systemd-logind[1426]: New session 12 of user core. Jan 29 11:14:47.070745 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:14:47.346855 sshd[5557]: Connection closed by 10.0.0.1 port 45402 Jan 29 11:14:47.347366 sshd-session[5554]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:47.350999 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:45402.service: Deactivated successfully. Jan 29 11:14:47.353311 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:14:47.354619 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:14:47.355621 systemd-logind[1426]: Removed session 12. 
Jan 29 11:14:47.466387 containerd[1449]: time="2025-01-29T11:14:47.466334976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:47.467575 containerd[1449]: time="2025-01-29T11:14:47.467222309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Jan 29 11:14:47.468788 containerd[1449]: time="2025-01-29T11:14:47.468749960Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:47.472363 containerd[1449]: time="2025-01-29T11:14:47.472304733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:47.475075 containerd[1449]: time="2025-01-29T11:14:47.475022216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.668387134s"
Jan 29 11:14:47.475075 containerd[1449]: time="2025-01-29T11:14:47.475067458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Jan 29 11:14:47.477339 containerd[1449]: time="2025-01-29T11:14:47.477299192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 11:14:47.487241 containerd[1449]: time="2025-01-29T11:14:47.487199065Z" level=info msg="CreateContainer within sandbox \"a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 29 11:14:47.503009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116177717.mount: Deactivated successfully.
Jan 29 11:14:47.504994 containerd[1449]: time="2025-01-29T11:14:47.504952887Z" level=info msg="CreateContainer within sandbox \"a76a2889656eb1fdbc499888d4829880912386c78f8e049c748c9dbca547c6ee\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b37439bb3d9de47bc232e8073a09814f4a1aa442d07e4f7deb1b15fa1ffea0fa\""
Jan 29 11:14:47.507871 containerd[1449]: time="2025-01-29T11:14:47.507821019Z" level=info msg="StartContainer for \"b37439bb3d9de47bc232e8073a09814f4a1aa442d07e4f7deb1b15fa1ffea0fa\""
Jan 29 11:14:47.537721 systemd[1]: Started cri-containerd-b37439bb3d9de47bc232e8073a09814f4a1aa442d07e4f7deb1b15fa1ffea0fa.scope - libcontainer container b37439bb3d9de47bc232e8073a09814f4a1aa442d07e4f7deb1b15fa1ffea0fa.
Jan 29 11:14:47.587644 containerd[1449]: time="2025-01-29T11:14:47.587477947Z" level=info msg="StartContainer for \"b37439bb3d9de47bc232e8073a09814f4a1aa442d07e4f7deb1b15fa1ffea0fa\" returns successfully"
Jan 29 11:14:47.590653 kubelet[2617]: E0129 11:14:47.589059 2617 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ebd6aa9_d128_4a03_9b92_9b846f7c50c7.slice/cri-containerd-b37439bb3d9de47bc232e8073a09814f4a1aa442d07e4f7deb1b15fa1ffea0fa.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 11:14:47.698053 kubelet[2617]: I0129 11:14:47.697954 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:14:47.722709 kubelet[2617]: I0129 11:14:47.722622 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fc6dd774d-mfrdw" podStartSLOduration=16.294417287999998 podStartE2EDuration="18.722319137s" podCreationTimestamp="2025-01-29 11:14:29 +0000 UTC" firstStartedPulling="2025-01-29 11:14:43.378088672 +0000 UTC m=+36.057657420" lastFinishedPulling="2025-01-29 11:14:45.805990481 +0000 UTC m=+38.485559269" observedRunningTime="2025-01-29 11:14:46.705740429 +0000 UTC m=+39.385309217" watchObservedRunningTime="2025-01-29 11:14:47.722319137 +0000 UTC m=+40.401887925"
Jan 29 11:14:47.723584 kubelet[2617]: I0129 11:14:47.723523 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67779b498c-2wfqf" podStartSLOduration=14.708963818 podStartE2EDuration="18.723512369s" podCreationTimestamp="2025-01-29 11:14:29 +0000 UTC" firstStartedPulling="2025-01-29 11:14:43.462258732 +0000 UTC m=+36.141827520" lastFinishedPulling="2025-01-29 11:14:47.476807283 +0000 UTC m=+40.156376071" observedRunningTime="2025-01-29 11:14:47.723265314 +0000 UTC m=+40.402834102" watchObservedRunningTime="2025-01-29 11:14:47.723512369 +0000 UTC m=+40.403081157"
Jan 29 11:14:48.437666 containerd[1449]: time="2025-01-29T11:14:48.437620070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:48.438682 containerd[1449]: time="2025-01-29T11:14:48.438638770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Jan 29 11:14:48.440108 containerd[1449]: time="2025-01-29T11:14:48.440068293Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:48.444260 containerd[1449]: time="2025-01-29T11:14:48.442694647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:48.444260 containerd[1449]: time="2025-01-29T11:14:48.443609980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 966.272105ms"
Jan 29 11:14:48.444260 containerd[1449]: time="2025-01-29T11:14:48.443655143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Jan 29 11:14:48.448323 containerd[1449]: time="2025-01-29T11:14:48.448278653Z" level=info msg="CreateContainer within sandbox \"8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 11:14:48.465213 containerd[1449]: time="2025-01-29T11:14:48.465160398Z" level=info msg="CreateContainer within sandbox \"8e5154ee076dae32482db8b1cee8d4ee069df7236d8cbfc5e34ca2e4b86a1bfe\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ecdb6656613e3ef2256cf7ad84bd8b1fcc600a927ce1ae58e0806084f5e3ddc6\""
Jan 29 11:14:48.466376 containerd[1449]: time="2025-01-29T11:14:48.466322106Z" level=info msg="StartContainer for \"ecdb6656613e3ef2256cf7ad84bd8b1fcc600a927ce1ae58e0806084f5e3ddc6\""
Jan 29 11:14:48.533770 systemd[1]: Started cri-containerd-ecdb6656613e3ef2256cf7ad84bd8b1fcc600a927ce1ae58e0806084f5e3ddc6.scope - libcontainer container ecdb6656613e3ef2256cf7ad84bd8b1fcc600a927ce1ae58e0806084f5e3ddc6.
Jan 29 11:14:48.572143 containerd[1449]: time="2025-01-29T11:14:48.572001476Z" level=info msg="StartContainer for \"ecdb6656613e3ef2256cf7ad84bd8b1fcc600a927ce1ae58e0806084f5e3ddc6\" returns successfully"
Jan 29 11:14:48.715985 kubelet[2617]: I0129 11:14:48.715673 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qjm8h" podStartSLOduration=14.549797744 podStartE2EDuration="19.715654624s" podCreationTimestamp="2025-01-29 11:14:29 +0000 UTC" firstStartedPulling="2025-01-29 11:14:43.278580868 +0000 UTC m=+35.958149656" lastFinishedPulling="2025-01-29 11:14:48.444437748 +0000 UTC m=+41.124006536" observedRunningTime="2025-01-29 11:14:48.71525568 +0000 UTC m=+41.394824468" watchObservedRunningTime="2025-01-29 11:14:48.715654624 +0000 UTC m=+41.395223412"
Jan 29 11:14:49.481291 kubelet[2617]: I0129 11:14:49.481225 2617 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 11:14:49.493434 kubelet[2617]: I0129 11:14:49.493396 2617 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 11:14:52.359674 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Jan 29 11:14:52.421142 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:14:52.421818 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:14:52.425845 systemd-logind[1426]: New session 13 of user core.
Jan 29 11:14:52.433734 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 11:14:52.663065 sshd[5688]: Connection closed by 10.0.0.1 port 45414
Jan 29 11:14:52.663110 sshd-session[5686]: pam_unix(sshd:session): session closed for user core
Jan 29 11:14:52.667930 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:45414.service: Deactivated successfully.
Jan 29 11:14:52.669818 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:14:52.672266 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:14:52.673806 systemd-logind[1426]: Removed session 13.
Jan 29 11:14:57.674518 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:36114.service - OpenSSH per-connection server daemon (10.0.0.1:36114).
Jan 29 11:14:57.720082 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:14:57.722849 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:14:57.728219 systemd-logind[1426]: New session 14 of user core.
Jan 29 11:14:57.734728 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 11:14:57.884090 sshd[5715]: Connection closed by 10.0.0.1 port 36114
Jan 29 11:14:57.885001 sshd-session[5713]: pam_unix(sshd:session): session closed for user core
Jan 29 11:14:57.894270 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:36114.service: Deactivated successfully.
Jan 29 11:14:57.896051 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:14:57.897570 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:14:57.907866 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:36118.service - OpenSSH per-connection server daemon (10.0.0.1:36118).
Jan 29 11:14:57.912684 systemd-logind[1426]: Removed session 14.
Jan 29 11:14:57.952322 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 36118 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:14:57.953528 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:14:57.959287 systemd-logind[1426]: New session 15 of user core.
Jan 29 11:14:57.965774 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:14:58.201015 sshd[5729]: Connection closed by 10.0.0.1 port 36118
Jan 29 11:14:58.201507 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Jan 29 11:14:58.211174 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:36118.service: Deactivated successfully.
Jan 29 11:14:58.213767 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:14:58.215418 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:14:58.224423 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:36130.service - OpenSSH per-connection server daemon (10.0.0.1:36130).
Jan 29 11:14:58.225680 systemd-logind[1426]: Removed session 15.
Jan 29 11:14:58.265988 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 36130 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:14:58.267752 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:14:58.271700 systemd-logind[1426]: New session 16 of user core.
Jan 29 11:14:58.281748 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:14:59.795192 sshd[5742]: Connection closed by 10.0.0.1 port 36130
Jan 29 11:14:59.797558 sshd-session[5740]: pam_unix(sshd:session): session closed for user core
Jan 29 11:14:59.807084 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:36130.service: Deactivated successfully.
Jan 29 11:14:59.809049 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:14:59.811144 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:14:59.817046 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:36132.service - OpenSSH per-connection server daemon (10.0.0.1:36132).
Jan 29 11:14:59.823891 systemd-logind[1426]: Removed session 16.
Jan 29 11:14:59.862159 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 36132 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:14:59.863374 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:14:59.867878 systemd-logind[1426]: New session 17 of user core.
Jan 29 11:14:59.875694 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:15:00.255117 sshd[5781]: Connection closed by 10.0.0.1 port 36132
Jan 29 11:15:00.255338 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:00.263483 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:36132.service: Deactivated successfully.
Jan 29 11:15:00.267767 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:15:00.269332 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:15:00.280802 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:36146.service - OpenSSH per-connection server daemon (10.0.0.1:36146).
Jan 29 11:15:00.281259 systemd-logind[1426]: Removed session 17.
Jan 29 11:15:00.327508 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 36146 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:00.328747 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:00.332627 systemd-logind[1426]: New session 18 of user core.
Jan 29 11:15:00.344674 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:15:00.466598 sshd[5794]: Connection closed by 10.0.0.1 port 36146
Jan 29 11:15:00.466928 sshd-session[5792]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:00.470594 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:36146.service: Deactivated successfully.
Jan 29 11:15:00.472383 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:15:00.474056 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:15:00.474859 systemd-logind[1426]: Removed session 18.
Jan 29 11:15:05.479245 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:54048.service - OpenSSH per-connection server daemon (10.0.0.1:54048).
Jan 29 11:15:05.519710 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 54048 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:05.520908 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:05.524299 systemd-logind[1426]: New session 19 of user core.
Jan 29 11:15:05.530730 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:15:05.661351 sshd[5821]: Connection closed by 10.0.0.1 port 54048
Jan 29 11:15:05.661714 sshd-session[5819]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:05.665237 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:54048.service: Deactivated successfully.
Jan 29 11:15:05.666971 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:15:05.668695 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:15:05.669480 systemd-logind[1426]: Removed session 19.
Jan 29 11:15:07.399871 containerd[1449]: time="2025-01-29T11:15:07.399833574Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\""
Jan 29 11:15:07.400219 containerd[1449]: time="2025-01-29T11:15:07.399952179Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully"
Jan 29 11:15:07.400219 containerd[1449]: time="2025-01-29T11:15:07.399963619Z" level=info msg="StopPodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully"
Jan 29 11:15:07.406470 containerd[1449]: time="2025-01-29T11:15:07.406430216Z" level=info msg="RemovePodSandbox for \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\""
Jan 29 11:15:07.406547 containerd[1449]: time="2025-01-29T11:15:07.406476658Z" level=info msg="Forcibly stopping sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\""
Jan 29 11:15:07.406573 containerd[1449]: time="2025-01-29T11:15:07.406562542Z" level=info msg="TearDown network for sandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" successfully"
Jan 29 11:15:07.422661 containerd[1449]: time="2025-01-29T11:15:07.422624510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.422720 containerd[1449]: time="2025-01-29T11:15:07.422685593Z" level=info msg="RemovePodSandbox \"bf528367f11e43a8e34a736b2780bebb1c634a29003405e7a0f641d483ae9382\" returns successfully"
Jan 29 11:15:07.423163 containerd[1449]: time="2025-01-29T11:15:07.423137052Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\""
Jan 29 11:15:07.423238 containerd[1449]: time="2025-01-29T11:15:07.423222536Z" level=info msg="TearDown network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" successfully"
Jan 29 11:15:07.423303 containerd[1449]: time="2025-01-29T11:15:07.423237936Z" level=info msg="StopPodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" returns successfully"
Jan 29 11:15:07.423578 containerd[1449]: time="2025-01-29T11:15:07.423547310Z" level=info msg="RemovePodSandbox for \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\""
Jan 29 11:15:07.423618 containerd[1449]: time="2025-01-29T11:15:07.423579151Z" level=info msg="Forcibly stopping sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\""
Jan 29 11:15:07.423739 containerd[1449]: time="2025-01-29T11:15:07.423643194Z" level=info msg="TearDown network for sandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" successfully"
Jan 29 11:15:07.426214 containerd[1449]: time="2025-01-29T11:15:07.426175342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.426269 containerd[1449]: time="2025-01-29T11:15:07.426219904Z" level=info msg="RemovePodSandbox \"709dfe75bed102bc4bda274d2141d84a8fc2f99b81bc41e7317008c2be69be03\" returns successfully"
Jan 29 11:15:07.426619 containerd[1449]: time="2025-01-29T11:15:07.426496236Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\""
Jan 29 11:15:07.426619 containerd[1449]: time="2025-01-29T11:15:07.426599480Z" level=info msg="TearDown network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" successfully"
Jan 29 11:15:07.426619 containerd[1449]: time="2025-01-29T11:15:07.426610041Z" level=info msg="StopPodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" returns successfully"
Jan 29 11:15:07.426955 containerd[1449]: time="2025-01-29T11:15:07.426842971Z" level=info msg="RemovePodSandbox for \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\""
Jan 29 11:15:07.426955 containerd[1449]: time="2025-01-29T11:15:07.426870132Z" level=info msg="Forcibly stopping sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\""
Jan 29 11:15:07.426955 containerd[1449]: time="2025-01-29T11:15:07.426934695Z" level=info msg="TearDown network for sandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" successfully"
Jan 29 11:15:07.429205 containerd[1449]: time="2025-01-29T11:15:07.429167990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.429276 containerd[1449]: time="2025-01-29T11:15:07.429232633Z" level=info msg="RemovePodSandbox \"ad548504ddda8b40acb97632eb7592b09255aa49606065b613491eb735ad78c2\" returns successfully"
Jan 29 11:15:07.429592 containerd[1449]: time="2025-01-29T11:15:07.429567407Z" level=info msg="StopPodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\""
Jan 29 11:15:07.429663 containerd[1449]: time="2025-01-29T11:15:07.429649091Z" level=info msg="TearDown network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" successfully"
Jan 29 11:15:07.429663 containerd[1449]: time="2025-01-29T11:15:07.429661932Z" level=info msg="StopPodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" returns successfully"
Jan 29 11:15:07.429949 containerd[1449]: time="2025-01-29T11:15:07.429929023Z" level=info msg="RemovePodSandbox for \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\""
Jan 29 11:15:07.429979 containerd[1449]: time="2025-01-29T11:15:07.429953344Z" level=info msg="Forcibly stopping sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\""
Jan 29 11:15:07.430066 containerd[1449]: time="2025-01-29T11:15:07.430012627Z" level=info msg="TearDown network for sandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" successfully"
Jan 29 11:15:07.432473 containerd[1449]: time="2025-01-29T11:15:07.432434850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.432532 containerd[1449]: time="2025-01-29T11:15:07.432488133Z" level=info msg="RemovePodSandbox \"29eb2c31f637daa93f3b9c099da3435f32bd1682f958a61932c8bbc27a1e5839\" returns successfully"
Jan 29 11:15:07.432816 containerd[1449]: time="2025-01-29T11:15:07.432793466Z" level=info msg="StopPodSandbox for \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\""
Jan 29 11:15:07.432899 containerd[1449]: time="2025-01-29T11:15:07.432883149Z" level=info msg="TearDown network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\" successfully"
Jan 29 11:15:07.432926 containerd[1449]: time="2025-01-29T11:15:07.432905710Z" level=info msg="StopPodSandbox for \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\" returns successfully"
Jan 29 11:15:07.433195 containerd[1449]: time="2025-01-29T11:15:07.433173042Z" level=info msg="RemovePodSandbox for \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\""
Jan 29 11:15:07.433227 containerd[1449]: time="2025-01-29T11:15:07.433200523Z" level=info msg="Forcibly stopping sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\""
Jan 29 11:15:07.433279 containerd[1449]: time="2025-01-29T11:15:07.433265566Z" level=info msg="TearDown network for sandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\" successfully"
Jan 29 11:15:07.435612 containerd[1449]: time="2025-01-29T11:15:07.435574905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.435667 containerd[1449]: time="2025-01-29T11:15:07.435626947Z" level=info msg="RemovePodSandbox \"7e625f167820e753a8cea9ea29b34e80f731655a3c21ddd16bb3b88a0aec7252\" returns successfully"
Jan 29 11:15:07.436086 containerd[1449]: time="2025-01-29T11:15:07.435934920Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\""
Jan 29 11:15:07.436086 containerd[1449]: time="2025-01-29T11:15:07.436026924Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully"
Jan 29 11:15:07.436086 containerd[1449]: time="2025-01-29T11:15:07.436037645Z" level=info msg="StopPodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully"
Jan 29 11:15:07.436306 containerd[1449]: time="2025-01-29T11:15:07.436280255Z" level=info msg="RemovePodSandbox for \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\""
Jan 29 11:15:07.436350 containerd[1449]: time="2025-01-29T11:15:07.436306536Z" level=info msg="Forcibly stopping sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\""
Jan 29 11:15:07.436410 containerd[1449]: time="2025-01-29T11:15:07.436392580Z" level=info msg="TearDown network for sandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" successfully"
Jan 29 11:15:07.438831 containerd[1449]: time="2025-01-29T11:15:07.438798923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.438997 containerd[1449]: time="2025-01-29T11:15:07.438852925Z" level=info msg="RemovePodSandbox \"e4d9cba9cae390fa77cab383fb93a93213fcbd0c82d7a884cfe0524995af34f9\" returns successfully"
Jan 29 11:15:07.439368 containerd[1449]: time="2025-01-29T11:15:07.439335546Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\""
Jan 29 11:15:07.439446 containerd[1449]: time="2025-01-29T11:15:07.439425510Z" level=info msg="TearDown network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" successfully"
Jan 29 11:15:07.439474 containerd[1449]: time="2025-01-29T11:15:07.439446031Z" level=info msg="StopPodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" returns successfully"
Jan 29 11:15:07.439698 containerd[1449]: time="2025-01-29T11:15:07.439663960Z" level=info msg="RemovePodSandbox for \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\""
Jan 29 11:15:07.439734 containerd[1449]: time="2025-01-29T11:15:07.439705482Z" level=info msg="Forcibly stopping sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\""
Jan 29 11:15:07.439784 containerd[1449]: time="2025-01-29T11:15:07.439770004Z" level=info msg="TearDown network for sandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" successfully"
Jan 29 11:15:07.442577 containerd[1449]: time="2025-01-29T11:15:07.442550684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.442641 containerd[1449]: time="2025-01-29T11:15:07.442595525Z" level=info msg="RemovePodSandbox \"8c9ee7e3e827cdf6d889cc3558fb83c7e48651b74ad251d0cbfcbbfa3bf71583\" returns successfully"
Jan 29 11:15:07.442946 containerd[1449]: time="2025-01-29T11:15:07.442919419Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\""
Jan 29 11:15:07.443243 containerd[1449]: time="2025-01-29T11:15:07.443207952Z" level=info msg="TearDown network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" successfully"
Jan 29 11:15:07.443243 containerd[1449]: time="2025-01-29T11:15:07.443229953Z" level=info msg="StopPodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" returns successfully"
Jan 29 11:15:07.445754 containerd[1449]: time="2025-01-29T11:15:07.445719459Z" level=info msg="RemovePodSandbox for \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\""
Jan 29 11:15:07.445803 containerd[1449]: time="2025-01-29T11:15:07.445751701Z" level=info msg="Forcibly stopping sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\""
Jan 29 11:15:07.445868 containerd[1449]: time="2025-01-29T11:15:07.445833384Z" level=info msg="TearDown network for sandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" successfully"
Jan 29 11:15:07.449750 containerd[1449]: time="2025-01-29T11:15:07.449709670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.449825 containerd[1449]: time="2025-01-29T11:15:07.449757872Z" level=info msg="RemovePodSandbox \"869dd61f56c9c864f017a01430bde1b7542d37fa2e70e226e56dfa183346eb16\" returns successfully"
Jan 29 11:15:07.450011 containerd[1449]: time="2025-01-29T11:15:07.449993162Z" level=info msg="StopPodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\""
Jan 29 11:15:07.450081 containerd[1449]: time="2025-01-29T11:15:07.450067405Z" level=info msg="TearDown network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" successfully"
Jan 29 11:15:07.450110 containerd[1449]: time="2025-01-29T11:15:07.450080366Z" level=info msg="StopPodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" returns successfully"
Jan 29 11:15:07.450333 containerd[1449]: time="2025-01-29T11:15:07.450314936Z" level=info msg="RemovePodSandbox for \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\""
Jan 29 11:15:07.450374 containerd[1449]: time="2025-01-29T11:15:07.450361618Z" level=info msg="Forcibly stopping sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\""
Jan 29 11:15:07.450430 containerd[1449]: time="2025-01-29T11:15:07.450417260Z" level=info msg="TearDown network for sandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" successfully"
Jan 29 11:15:07.452753 containerd[1449]: time="2025-01-29T11:15:07.452718599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.452786 containerd[1449]: time="2025-01-29T11:15:07.452776442Z" level=info msg="RemovePodSandbox \"d1e731b1d0ee28e61a3f579eacd7a80e80a977a06a8e590532d746f796be7f3c\" returns successfully" Jan 29 11:15:07.453030 containerd[1449]: time="2025-01-29T11:15:07.453004451Z" level=info msg="StopPodSandbox for \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\"" Jan 29 11:15:07.453094 containerd[1449]: time="2025-01-29T11:15:07.453079214Z" level=info msg="TearDown network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\" successfully" Jan 29 11:15:07.453094 containerd[1449]: time="2025-01-29T11:15:07.453092855Z" level=info msg="StopPodSandbox for \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\" returns successfully" Jan 29 11:15:07.453343 containerd[1449]: time="2025-01-29T11:15:07.453297264Z" level=info msg="RemovePodSandbox for \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\"" Jan 29 11:15:07.453390 containerd[1449]: time="2025-01-29T11:15:07.453344306Z" level=info msg="Forcibly stopping sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\"" Jan 29 11:15:07.453441 containerd[1449]: time="2025-01-29T11:15:07.453427069Z" level=info msg="TearDown network for sandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\" successfully" Jan 29 11:15:07.456195 containerd[1449]: time="2025-01-29T11:15:07.456159506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.456262 containerd[1449]: time="2025-01-29T11:15:07.456210709Z" level=info msg="RemovePodSandbox \"322780e0fbbd2a9d67e9833e8f788956b9d6ef23ac86d8bbc0fa008d67ddc3cd\" returns successfully" Jan 29 11:15:07.456470 containerd[1449]: time="2025-01-29T11:15:07.456450199Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:15:07.456540 containerd[1449]: time="2025-01-29T11:15:07.456524242Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully" Jan 29 11:15:07.456575 containerd[1449]: time="2025-01-29T11:15:07.456559124Z" level=info msg="StopPodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully" Jan 29 11:15:07.456792 containerd[1449]: time="2025-01-29T11:15:07.456768453Z" level=info msg="RemovePodSandbox for \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:15:07.456826 containerd[1449]: time="2025-01-29T11:15:07.456791333Z" level=info msg="Forcibly stopping sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\"" Jan 29 11:15:07.456851 containerd[1449]: time="2025-01-29T11:15:07.456839456Z" level=info msg="TearDown network for sandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" successfully" Jan 29 11:15:07.461253 containerd[1449]: time="2025-01-29T11:15:07.461212163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.461305 containerd[1449]: time="2025-01-29T11:15:07.461273725Z" level=info msg="RemovePodSandbox \"a88be772f6464dd260678989650dc1c6a9093a96ca4fec788d5416be1324175f\" returns successfully" Jan 29 11:15:07.461654 containerd[1449]: time="2025-01-29T11:15:07.461632581Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" Jan 29 11:15:07.461735 containerd[1449]: time="2025-01-29T11:15:07.461720305Z" level=info msg="TearDown network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" successfully" Jan 29 11:15:07.461771 containerd[1449]: time="2025-01-29T11:15:07.461733705Z" level=info msg="StopPodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" returns successfully" Jan 29 11:15:07.462008 containerd[1449]: time="2025-01-29T11:15:07.461985676Z" level=info msg="RemovePodSandbox for \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" Jan 29 11:15:07.462039 containerd[1449]: time="2025-01-29T11:15:07.462013997Z" level=info msg="Forcibly stopping sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\"" Jan 29 11:15:07.462101 containerd[1449]: time="2025-01-29T11:15:07.462080200Z" level=info msg="TearDown network for sandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" successfully" Jan 29 11:15:07.465067 containerd[1449]: time="2025-01-29T11:15:07.464941843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.465067 containerd[1449]: time="2025-01-29T11:15:07.464990885Z" level=info msg="RemovePodSandbox \"69d7b350ee39c714d798a121b8f1ab910bb23fe2f5eb7eb0551f8dd88ed28381\" returns successfully" Jan 29 11:15:07.465677 containerd[1449]: time="2025-01-29T11:15:07.465647513Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" Jan 29 11:15:07.465878 containerd[1449]: time="2025-01-29T11:15:07.465856162Z" level=info msg="TearDown network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" successfully" Jan 29 11:15:07.465878 containerd[1449]: time="2025-01-29T11:15:07.465874563Z" level=info msg="StopPodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" returns successfully" Jan 29 11:15:07.466185 containerd[1449]: time="2025-01-29T11:15:07.466163695Z" level=info msg="RemovePodSandbox for \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" Jan 29 11:15:07.466185 containerd[1449]: time="2025-01-29T11:15:07.466187336Z" level=info msg="Forcibly stopping sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\"" Jan 29 11:15:07.466261 containerd[1449]: time="2025-01-29T11:15:07.466241698Z" level=info msg="TearDown network for sandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" successfully" Jan 29 11:15:07.468650 containerd[1449]: time="2025-01-29T11:15:07.468580958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.468650 containerd[1449]: time="2025-01-29T11:15:07.468634881Z" level=info msg="RemovePodSandbox \"e6cc089ed74f3242e0fd11edc9f803035e6b49cba574ae4e6f7ff61698522e21\" returns successfully" Jan 29 11:15:07.468989 containerd[1449]: time="2025-01-29T11:15:07.468964815Z" level=info msg="StopPodSandbox for \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\"" Jan 29 11:15:07.469055 containerd[1449]: time="2025-01-29T11:15:07.469040858Z" level=info msg="TearDown network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" successfully" Jan 29 11:15:07.469099 containerd[1449]: time="2025-01-29T11:15:07.469053299Z" level=info msg="StopPodSandbox for \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" returns successfully" Jan 29 11:15:07.469295 containerd[1449]: time="2025-01-29T11:15:07.469271628Z" level=info msg="RemovePodSandbox for \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\"" Jan 29 11:15:07.469295 containerd[1449]: time="2025-01-29T11:15:07.469290949Z" level=info msg="Forcibly stopping sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\"" Jan 29 11:15:07.469345 containerd[1449]: time="2025-01-29T11:15:07.469336391Z" level=info msg="TearDown network for sandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" successfully" Jan 29 11:15:07.471604 containerd[1449]: time="2025-01-29T11:15:07.471550366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.471604 containerd[1449]: time="2025-01-29T11:15:07.471598008Z" level=info msg="RemovePodSandbox \"79baea975388ffda91a1450c631b90901be41b90e1906d4c25b91218d8af214c\" returns successfully" Jan 29 11:15:07.471917 containerd[1449]: time="2025-01-29T11:15:07.471851859Z" level=info msg="StopPodSandbox for \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\"" Jan 29 11:15:07.471967 containerd[1449]: time="2025-01-29T11:15:07.471935862Z" level=info msg="TearDown network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\" successfully" Jan 29 11:15:07.471967 containerd[1449]: time="2025-01-29T11:15:07.471945383Z" level=info msg="StopPodSandbox for \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\" returns successfully" Jan 29 11:15:07.472318 containerd[1449]: time="2025-01-29T11:15:07.472268516Z" level=info msg="RemovePodSandbox for \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\"" Jan 29 11:15:07.473445 containerd[1449]: time="2025-01-29T11:15:07.472398642Z" level=info msg="Forcibly stopping sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\"" Jan 29 11:15:07.473445 containerd[1449]: time="2025-01-29T11:15:07.472465925Z" level=info msg="TearDown network for sandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\" successfully" Jan 29 11:15:07.474826 containerd[1449]: time="2025-01-29T11:15:07.474719181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.474826 containerd[1449]: time="2025-01-29T11:15:07.474763903Z" level=info msg="RemovePodSandbox \"a66e882a3333be99350673d012e867585953f08abd75eb4ad65fcc800fd5c5a9\" returns successfully" Jan 29 11:15:07.475100 containerd[1449]: time="2025-01-29T11:15:07.475046755Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:15:07.475163 containerd[1449]: time="2025-01-29T11:15:07.475126119Z" level=info msg="TearDown network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully" Jan 29 11:15:07.475163 containerd[1449]: time="2025-01-29T11:15:07.475138119Z" level=info msg="StopPodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully" Jan 29 11:15:07.476196 containerd[1449]: time="2025-01-29T11:15:07.475552017Z" level=info msg="RemovePodSandbox for \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:15:07.476196 containerd[1449]: time="2025-01-29T11:15:07.475579378Z" level=info msg="Forcibly stopping sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\"" Jan 29 11:15:07.476196 containerd[1449]: time="2025-01-29T11:15:07.475648581Z" level=info msg="TearDown network for sandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" successfully" Jan 29 11:15:07.478362 containerd[1449]: time="2025-01-29T11:15:07.478333656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.478469 containerd[1449]: time="2025-01-29T11:15:07.478454221Z" level=info msg="RemovePodSandbox \"ca18c1a9dfd9666bf96c24bd4812e4df6f06e075f618bf67129b0c9b1d6e00f5\" returns successfully" Jan 29 11:15:07.478864 containerd[1449]: time="2025-01-29T11:15:07.478839398Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" Jan 29 11:15:07.478941 containerd[1449]: time="2025-01-29T11:15:07.478924001Z" level=info msg="TearDown network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" successfully" Jan 29 11:15:07.478941 containerd[1449]: time="2025-01-29T11:15:07.478938442Z" level=info msg="StopPodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" returns successfully" Jan 29 11:15:07.479227 containerd[1449]: time="2025-01-29T11:15:07.479202613Z" level=info msg="RemovePodSandbox for \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" Jan 29 11:15:07.479261 containerd[1449]: time="2025-01-29T11:15:07.479231495Z" level=info msg="Forcibly stopping sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\"" Jan 29 11:15:07.479306 containerd[1449]: time="2025-01-29T11:15:07.479292337Z" level=info msg="TearDown network for sandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" successfully" Jan 29 11:15:07.481750 containerd[1449]: time="2025-01-29T11:15:07.481716521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.481790 containerd[1449]: time="2025-01-29T11:15:07.481781644Z" level=info msg="RemovePodSandbox \"65c68354ce3b4746949bb83570ecb5577ba46c12afec4fb9274b1fca94accc3b\" returns successfully" Jan 29 11:15:07.482130 containerd[1449]: time="2025-01-29T11:15:07.482108098Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" Jan 29 11:15:07.482371 containerd[1449]: time="2025-01-29T11:15:07.482255984Z" level=info msg="TearDown network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" successfully" Jan 29 11:15:07.482371 containerd[1449]: time="2025-01-29T11:15:07.482292626Z" level=info msg="StopPodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" returns successfully" Jan 29 11:15:07.482623 containerd[1449]: time="2025-01-29T11:15:07.482590398Z" level=info msg="RemovePodSandbox for \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" Jan 29 11:15:07.482623 containerd[1449]: time="2025-01-29T11:15:07.482619760Z" level=info msg="Forcibly stopping sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\"" Jan 29 11:15:07.482702 containerd[1449]: time="2025-01-29T11:15:07.482689603Z" level=info msg="TearDown network for sandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" successfully" Jan 29 11:15:07.484929 containerd[1449]: time="2025-01-29T11:15:07.484901417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.484993 containerd[1449]: time="2025-01-29T11:15:07.484952140Z" level=info msg="RemovePodSandbox \"56faaac68f1e0dc7c9a6d28c14a0f560388bbedff4d885009dfa7d5e3adb2b8a\" returns successfully" Jan 29 11:15:07.485342 containerd[1449]: time="2025-01-29T11:15:07.485319355Z" level=info msg="StopPodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\"" Jan 29 11:15:07.485588 containerd[1449]: time="2025-01-29T11:15:07.485454441Z" level=info msg="TearDown network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" successfully" Jan 29 11:15:07.485588 containerd[1449]: time="2025-01-29T11:15:07.485467642Z" level=info msg="StopPodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" returns successfully" Jan 29 11:15:07.485767 containerd[1449]: time="2025-01-29T11:15:07.485719653Z" level=info msg="RemovePodSandbox for \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\"" Jan 29 11:15:07.485767 containerd[1449]: time="2025-01-29T11:15:07.485744774Z" level=info msg="Forcibly stopping sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\"" Jan 29 11:15:07.485854 containerd[1449]: time="2025-01-29T11:15:07.485803376Z" level=info msg="TearDown network for sandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" successfully" Jan 29 11:15:07.488051 containerd[1449]: time="2025-01-29T11:15:07.488022831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.488099 containerd[1449]: time="2025-01-29T11:15:07.488075793Z" level=info msg="RemovePodSandbox \"99b997d9b61fc9947a6b5c12bae6400b2519ca5126fd667746029058a3af3749\" returns successfully" Jan 29 11:15:07.488426 containerd[1449]: time="2025-01-29T11:15:07.488396687Z" level=info msg="StopPodSandbox for \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\"" Jan 29 11:15:07.488488 containerd[1449]: time="2025-01-29T11:15:07.488479811Z" level=info msg="TearDown network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\" successfully" Jan 29 11:15:07.488522 containerd[1449]: time="2025-01-29T11:15:07.488489331Z" level=info msg="StopPodSandbox for \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\" returns successfully" Jan 29 11:15:07.489558 containerd[1449]: time="2025-01-29T11:15:07.488828906Z" level=info msg="RemovePodSandbox for \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\"" Jan 29 11:15:07.489558 containerd[1449]: time="2025-01-29T11:15:07.488858587Z" level=info msg="Forcibly stopping sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\"" Jan 29 11:15:07.489558 containerd[1449]: time="2025-01-29T11:15:07.488923470Z" level=info msg="TearDown network for sandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\" successfully" Jan 29 11:15:07.491038 containerd[1449]: time="2025-01-29T11:15:07.491003639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.491084 containerd[1449]: time="2025-01-29T11:15:07.491052201Z" level=info msg="RemovePodSandbox \"950e76354bb43734e80ce9b6bf1efa20970381f2ae4520719b884a5083e68749\" returns successfully" Jan 29 11:15:07.491431 containerd[1449]: time="2025-01-29T11:15:07.491380935Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" Jan 29 11:15:07.491508 containerd[1449]: time="2025-01-29T11:15:07.491471739Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully" Jan 29 11:15:07.491508 containerd[1449]: time="2025-01-29T11:15:07.491482859Z" level=info msg="StopPodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully" Jan 29 11:15:07.492559 containerd[1449]: time="2025-01-29T11:15:07.491734430Z" level=info msg="RemovePodSandbox for \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" Jan 29 11:15:07.492559 containerd[1449]: time="2025-01-29T11:15:07.491763151Z" level=info msg="Forcibly stopping sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\"" Jan 29 11:15:07.492559 containerd[1449]: time="2025-01-29T11:15:07.491825994Z" level=info msg="TearDown network for sandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" successfully" Jan 29 11:15:07.494117 containerd[1449]: time="2025-01-29T11:15:07.494082971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.494177 containerd[1449]: time="2025-01-29T11:15:07.494129213Z" level=info msg="RemovePodSandbox \"3f702e3ed71a649d639a6e2a3e7b2ec097b1ab188658f2823f0dcee6d8995871\" returns successfully" Jan 29 11:15:07.494412 containerd[1449]: time="2025-01-29T11:15:07.494372743Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" Jan 29 11:15:07.494523 containerd[1449]: time="2025-01-29T11:15:07.494455947Z" level=info msg="TearDown network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" successfully" Jan 29 11:15:07.494523 containerd[1449]: time="2025-01-29T11:15:07.494472387Z" level=info msg="StopPodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" returns successfully" Jan 29 11:15:07.496007 containerd[1449]: time="2025-01-29T11:15:07.494927487Z" level=info msg="RemovePodSandbox for \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" Jan 29 11:15:07.496007 containerd[1449]: time="2025-01-29T11:15:07.494955248Z" level=info msg="Forcibly stopping sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\"" Jan 29 11:15:07.496007 containerd[1449]: time="2025-01-29T11:15:07.495015891Z" level=info msg="TearDown network for sandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" successfully" Jan 29 11:15:07.497199 containerd[1449]: time="2025-01-29T11:15:07.497169303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.497300 containerd[1449]: time="2025-01-29T11:15:07.497285228Z" level=info msg="RemovePodSandbox \"4e91547bba70268c12917f4e096b69ed267d03925e4cf72907158bebe816f35e\" returns successfully" Jan 29 11:15:07.497642 containerd[1449]: time="2025-01-29T11:15:07.497619442Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" Jan 29 11:15:07.497724 containerd[1449]: time="2025-01-29T11:15:07.497707766Z" level=info msg="TearDown network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" successfully" Jan 29 11:15:07.497724 containerd[1449]: time="2025-01-29T11:15:07.497723127Z" level=info msg="StopPodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" returns successfully" Jan 29 11:15:07.497973 containerd[1449]: time="2025-01-29T11:15:07.497948136Z" level=info msg="RemovePodSandbox for \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" Jan 29 11:15:07.498020 containerd[1449]: time="2025-01-29T11:15:07.497978298Z" level=info msg="Forcibly stopping sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\"" Jan 29 11:15:07.498056 containerd[1449]: time="2025-01-29T11:15:07.498038700Z" level=info msg="TearDown network for sandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" successfully" Jan 29 11:15:07.500308 containerd[1449]: time="2025-01-29T11:15:07.500278116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.500361 containerd[1449]: time="2025-01-29T11:15:07.500327278Z" level=info msg="RemovePodSandbox \"f161b34e2ca184d2cef27c94360f76e1da681b247e7bbadafff013e0ae56519d\" returns successfully" Jan 29 11:15:07.500625 containerd[1449]: time="2025-01-29T11:15:07.500600610Z" level=info msg="StopPodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\"" Jan 29 11:15:07.500702 containerd[1449]: time="2025-01-29T11:15:07.500686214Z" level=info msg="TearDown network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" successfully" Jan 29 11:15:07.500746 containerd[1449]: time="2025-01-29T11:15:07.500701054Z" level=info msg="StopPodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" returns successfully" Jan 29 11:15:07.500942 containerd[1449]: time="2025-01-29T11:15:07.500914663Z" level=info msg="RemovePodSandbox for \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\"" Jan 29 11:15:07.500985 containerd[1449]: time="2025-01-29T11:15:07.500945305Z" level=info msg="Forcibly stopping sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\"" Jan 29 11:15:07.501022 containerd[1449]: time="2025-01-29T11:15:07.501007267Z" level=info msg="TearDown network for sandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" successfully" Jan 29 11:15:07.503359 containerd[1449]: time="2025-01-29T11:15:07.503319086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.503412 containerd[1449]: time="2025-01-29T11:15:07.503372649Z" level=info msg="RemovePodSandbox \"97c698e974571dbf51c46bb94d2e4ad49065c6c96bd38cbc50a9b0c32e010350\" returns successfully" Jan 29 11:15:07.503832 containerd[1449]: time="2025-01-29T11:15:07.503667941Z" level=info msg="StopPodSandbox for \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\"" Jan 29 11:15:07.503832 containerd[1449]: time="2025-01-29T11:15:07.503760945Z" level=info msg="TearDown network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\" successfully" Jan 29 11:15:07.503832 containerd[1449]: time="2025-01-29T11:15:07.503771226Z" level=info msg="StopPodSandbox for \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\" returns successfully" Jan 29 11:15:07.504012 containerd[1449]: time="2025-01-29T11:15:07.503963794Z" level=info msg="RemovePodSandbox for \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\"" Jan 29 11:15:07.504012 containerd[1449]: time="2025-01-29T11:15:07.503992595Z" level=info msg="Forcibly stopping sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\"" Jan 29 11:15:07.504066 containerd[1449]: time="2025-01-29T11:15:07.504049518Z" level=info msg="TearDown network for sandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\" successfully" Jan 29 11:15:07.506215 containerd[1449]: time="2025-01-29T11:15:07.506177329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.506287 containerd[1449]: time="2025-01-29T11:15:07.506229291Z" level=info msg="RemovePodSandbox \"05507430999814a7c2e5478dfe67ca3f49a3bd7720934b874840a465069b6847\" returns successfully" Jan 29 11:15:07.513280 containerd[1449]: time="2025-01-29T11:15:07.513254352Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:15:07.513357 containerd[1449]: time="2025-01-29T11:15:07.513341476Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully" Jan 29 11:15:07.513388 containerd[1449]: time="2025-01-29T11:15:07.513356636Z" level=info msg="StopPodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully" Jan 29 11:15:07.514111 containerd[1449]: time="2025-01-29T11:15:07.513963902Z" level=info msg="RemovePodSandbox for \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:15:07.514111 containerd[1449]: time="2025-01-29T11:15:07.513994184Z" level=info msg="Forcibly stopping sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\"" Jan 29 11:15:07.514111 containerd[1449]: time="2025-01-29T11:15:07.514052826Z" level=info msg="TearDown network for sandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" successfully" Jan 29 11:15:07.516248 containerd[1449]: time="2025-01-29T11:15:07.516215199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.516311 containerd[1449]: time="2025-01-29T11:15:07.516264281Z" level=info msg="RemovePodSandbox \"a2ae9cb798b317a86fec5742597dc31e44ff2fe08daf8e14962939488488e77c\" returns successfully" Jan 29 11:15:07.516927 containerd[1449]: time="2025-01-29T11:15:07.516897468Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" Jan 29 11:15:07.517010 containerd[1449]: time="2025-01-29T11:15:07.516978671Z" level=info msg="TearDown network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" successfully" Jan 29 11:15:07.517010 containerd[1449]: time="2025-01-29T11:15:07.516994872Z" level=info msg="StopPodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" returns successfully" Jan 29 11:15:07.517412 containerd[1449]: time="2025-01-29T11:15:07.517388129Z" level=info msg="RemovePodSandbox for \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" Jan 29 11:15:07.517634 containerd[1449]: time="2025-01-29T11:15:07.517493653Z" level=info msg="Forcibly stopping sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\"" Jan 29 11:15:07.517634 containerd[1449]: time="2025-01-29T11:15:07.517584217Z" level=info msg="TearDown network for sandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" successfully" Jan 29 11:15:07.520117 containerd[1449]: time="2025-01-29T11:15:07.519996841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.520117 containerd[1449]: time="2025-01-29T11:15:07.520045403Z" level=info msg="RemovePodSandbox \"d2cc7a6a4e0a1c13e079bf6fddda00a8d9b1551258bc245ea2db4efe6847ea4e\" returns successfully" Jan 29 11:15:07.520474 containerd[1449]: time="2025-01-29T11:15:07.520389417Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" Jan 29 11:15:07.520474 containerd[1449]: time="2025-01-29T11:15:07.520471821Z" level=info msg="TearDown network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" successfully" Jan 29 11:15:07.520612 containerd[1449]: time="2025-01-29T11:15:07.520482461Z" level=info msg="StopPodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" returns successfully" Jan 29 11:15:07.524033 containerd[1449]: time="2025-01-29T11:15:07.523992572Z" level=info msg="RemovePodSandbox for \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" Jan 29 11:15:07.524094 containerd[1449]: time="2025-01-29T11:15:07.524038134Z" level=info msg="Forcibly stopping sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\"" Jan 29 11:15:07.524117 containerd[1449]: time="2025-01-29T11:15:07.524106017Z" level=info msg="TearDown network for sandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" successfully" Jan 29 11:15:07.526505 containerd[1449]: time="2025-01-29T11:15:07.526469878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:15:07.526642 containerd[1449]: time="2025-01-29T11:15:07.526529680Z" level=info msg="RemovePodSandbox \"35a4f9560c3370c4f2e237d539b1e38f1e20f506d284ebbcb0e350e3873d37a9\" returns successfully"
Jan 29 11:15:07.526949 containerd[1449]: time="2025-01-29T11:15:07.526917297Z" level=info msg="StopPodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\""
Jan 29 11:15:07.527028 containerd[1449]: time="2025-01-29T11:15:07.527011501Z" level=info msg="TearDown network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" successfully"
Jan 29 11:15:07.527028 containerd[1449]: time="2025-01-29T11:15:07.527025222Z" level=info msg="StopPodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" returns successfully"
Jan 29 11:15:07.527281 containerd[1449]: time="2025-01-29T11:15:07.527239751Z" level=info msg="RemovePodSandbox for \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\""
Jan 29 11:15:07.527323 containerd[1449]: time="2025-01-29T11:15:07.527286713Z" level=info msg="Forcibly stopping sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\""
Jan 29 11:15:07.527366 containerd[1449]: time="2025-01-29T11:15:07.527351236Z" level=info msg="TearDown network for sandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" successfully"
Jan 29 11:15:07.529620 containerd[1449]: time="2025-01-29T11:15:07.529576851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.529691 containerd[1449]: time="2025-01-29T11:15:07.529638814Z" level=info msg="RemovePodSandbox \"b3a07e501bedda2a810c45c73d98a053ee08806651480067609e159db972d5dd\" returns successfully"
Jan 29 11:15:07.529998 containerd[1449]: time="2025-01-29T11:15:07.529960507Z" level=info msg="StopPodSandbox for \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\""
Jan 29 11:15:07.530069 containerd[1449]: time="2025-01-29T11:15:07.530053231Z" level=info msg="TearDown network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\" successfully"
Jan 29 11:15:07.530103 containerd[1449]: time="2025-01-29T11:15:07.530069432Z" level=info msg="StopPodSandbox for \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\" returns successfully"
Jan 29 11:15:07.530356 containerd[1449]: time="2025-01-29T11:15:07.530323403Z" level=info msg="RemovePodSandbox for \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\""
Jan 29 11:15:07.530384 containerd[1449]: time="2025-01-29T11:15:07.530353964Z" level=info msg="Forcibly stopping sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\""
Jan 29 11:15:07.530445 containerd[1449]: time="2025-01-29T11:15:07.530425447Z" level=info msg="TearDown network for sandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\" successfully"
Jan 29 11:15:07.533006 containerd[1449]: time="2025-01-29T11:15:07.532967196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:15:07.533048 containerd[1449]: time="2025-01-29T11:15:07.533026239Z" level=info msg="RemovePodSandbox \"b259fdff670c5a8bdc94950ea35071e17e573a823e122b2930ec7fd19230703a\" returns successfully"
Jan 29 11:15:09.183223 kubelet[2617]: E0129 11:15:09.183183 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:10.678717 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:54060.service - OpenSSH per-connection server daemon (10.0.0.1:54060).
Jan 29 11:15:10.725742 sshd[5878]: Accepted publickey for core from 10.0.0.1 port 54060 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:10.727213 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:10.733063 systemd-logind[1426]: New session 20 of user core.
Jan 29 11:15:10.740693 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:15:10.874040 sshd[5880]: Connection closed by 10.0.0.1 port 54060
Jan 29 11:15:10.874369 sshd-session[5878]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:10.878307 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:54060.service: Deactivated successfully.
Jan 29 11:15:10.880571 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:15:10.881465 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:15:10.882464 systemd-logind[1426]: Removed session 20.
Jan 29 11:15:12.833418 kubelet[2617]: I0129 11:15:12.832508 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:15:15.886008 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:34402.service - OpenSSH per-connection server daemon (10.0.0.1:34402).
Jan 29 11:15:15.925248 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 34402 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:15.926710 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:15.930582 systemd-logind[1426]: New session 21 of user core.
Jan 29 11:15:15.941686 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:15:16.112199 sshd[5899]: Connection closed by 10.0.0.1 port 34402
Jan 29 11:15:16.112555 sshd-session[5897]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:16.115700 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:34402.service: Deactivated successfully.
Jan 29 11:15:16.117391 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:15:16.117937 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:15:16.118713 systemd-logind[1426]: Removed session 21.